Recognition: 2 theorem links
· Lean TheoremSLE-FNO: Single-Layer Extensions for Task-Agnostic Continual Learning in Fourier Neural Operators
Pith reviewed 2026-05-15 08:02 UTC · model grok-4.3
The pith
Single-layer extensions added to Fourier Neural Operators enable continual learning across shifting fluid tasks with zero forgetting and few new parameters.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
SLE-FNO achieves accurate predictions on new out-of-distribution fluid tasks while producing zero forgetting on prior tasks and adding only minimal parameters, delivering a stronger plasticity-stability balance than EWC, LwF, replay buffers, OGD, GEM, PiggyBack, or LoRA in the four-task blood-flow sequence.
What carries the argument
The Single-Layer Extension (SLE) that appends lightweight task-specific layers to a frozen Fourier Neural Operator backbone, enabling task-agnostic updates without full retraining or data replay.
If this is right
- Surrogate models for pulsatile flows can be updated sequentially as new experimental conditions arise without storing prior simulation data.
- Computational cost for adapting to new geometries stays low because only a small number of extra parameters are introduced per task.
- Zero-forgetting performance holds across the tested sequence of distribution shifts in aneurysmal blood flow.
- Architecture-based continual learning outperforms or matches replay and regularization baselines in this spatial regression setting.
Where Pith is reading between the lines
- The same single-layer pattern could be tested on other neural operator families for time-dependent physics problems beyond blood flow.
- If single-layer capacity proves insufficient on wider task sequences, hybrid combinations with light replay buffers might become necessary.
- The approach implies that many scientific surrogate models could be maintained as living models rather than retrained from scratch when conditions change.
Load-bearing premise
Single-layer additions alone can capture all required adaptations for out-of-distribution fluid tasks without harming earlier performance.
What would settle it
Run SLE-FNO on a fifth blood-flow task whose geometry or regime lies further outside the training distribution and check whether accuracy on the original four tasks remains at the reported level with no measurable drop.
Figures
read the original abstract
Scientific machine learning is increasingly used to build surrogate models, yet most models are trained under a restrictive assumption in which future data follow the same distribution as the training set. In practice, new experimental conditions or simulation regimes may differ significantly, requiring extrapolation and model updates without re-access to prior data. This creates a need for continual learning (CL) frameworks that can adapt to distribution shifts while preventing catastrophic forgetting. Such challenges are pronounced in fluid dynamics, where changes in geometry, boundary conditions, or flow regimes induce non-trivial changes to the solution. Here, we introduce a new architecture-based approach (SLE-FNO) combining a Single-Layer Extension (SLE) with the Fourier Neural Operator (FNO) to support efficient CL. SLE-FNO was compared with a range of established CL methods, including Elastic Weight Consolidation (EWC), Learning without Forgetting (LwF), replay-based approaches, Orthogonal Gradient Descent (OGD), Gradient Episodic Memory (GEM), PiggyBack, and Low-Rank Adaptation (LoRA), within a spatial field-to-field regression setting. The models were trained to map transient concentration fields to time-averaged wall shear stress (TAWSS) in pulsatile aneurysmal blood flow. Tasks were derived from 230 computational fluid dynamics simulations grouped into four sequential and out-of-distribution configurations. Results show that replay-based methods and architecture-based approaches (PiggyBack, LoRA, and SLE-FNO) achieve the best retention, with SLE-FNO providing the strongest overall balance between plasticity and stability, achieving accuracy with zero forgetting and minimal additional parameters. Our findings highlight key differences between CL algorithms and introduce SLE-FNO as a promising strategy for adapting baseline models when extrapolation is required.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces SLE-FNO, an architecture-based continual learning method that augments the Fourier Neural Operator with a Single-Layer Extension (SLE) to enable adaptation to distribution shifts without catastrophic forgetting. It evaluates SLE-FNO against baselines including EWC, LwF, GEM, OGD, PiggyBack, and LoRA on a spatial field-to-field regression task: mapping transient concentration fields to time-averaged wall shear stress (TAWSS) using data from 230 CFD simulations grouped into four sequential out-of-distribution configurations derived from aneurysmal blood flow. The central claim is that SLE-FNO achieves accuracy with zero forgetting and minimal added parameters while providing the strongest overall balance between plasticity and stability.
Significance. If the results hold under broader testing, the work would be significant for providing an efficient, task-agnostic extension mechanism for FNOs in scientific machine learning applications involving non-stationary data, such as varying geometries or flow regimes in fluid dynamics. It could support the development of adaptive surrogate models that extrapolate without replay or heavy regularization, and the empirical comparison highlights practical differences among CL algorithms in this domain.
major comments (2)
- [Abstract] Abstract: The headline claims of 'accuracy with zero forgetting' and 'strongest overall balance' are presented without any quantitative metrics, error bars, statistical tests, or details on the magnitude of improvements over baselines, which is load-bearing for verifying the central performance assertions.
- [Abstract] Abstract (results paragraph): The evaluation is restricted to one fixed sequential ordering of four out-of-distribution tasks from the 230 CFD runs; the absence of ablations on task permutation, scaling beyond four tasks, or transfer to different geometries/Reynolds regimes undermines the claim that SLE-FNO is robustly task-agnostic.
minor comments (1)
- [Abstract] Abstract: The experimental setup description omits specifics on the four task configurations (e.g., exact changes in geometry or boundary conditions) and the precise implementation details for the baseline methods.
Simulated Author's Rebuttal
We thank the referee for the thoughtful and detailed review. The comments highlight important aspects of how the abstract presents our results and the scope of our evaluation. We address each point below and outline revisions that strengthen the manuscript without overstating our contributions.
read point-by-point responses
-
Referee: [Abstract] Abstract: The headline claims of 'accuracy with zero forgetting' and 'strongest overall balance' are presented without any quantitative metrics, error bars, statistical tests, or details on the magnitude of improvements over baselines, which is load-bearing for verifying the central performance assertions.
Authors: We agree that the abstract would be strengthened by including concrete quantitative support for these claims. In the revised version, we will incorporate specific metrics drawn from our experiments, such as the relative L2 errors on each task (e.g., SLE-FNO maintains <1% error with zero forgetting while baselines show 5-15% degradation), the number of additional parameters (under 2% of the base FNO), and standard deviations across repeated runs. We will also briefly note the magnitude of improvement over the strongest baselines (PiggyBack and LoRA) to make the performance assertions directly verifiable from the abstract. revision: yes
-
Referee: [Abstract] Abstract (results paragraph): The evaluation is restricted to one fixed sequential ordering of four out-of-distribution tasks from the 230 CFD runs; the absence of ablations on task permutation, scaling beyond four tasks, or transfer to different geometries/Reynolds regimes undermines the claim that SLE-FNO is robustly task-agnostic.
Authors: The referee correctly identifies that our experiments use a single task ordering. This ordering was selected to emulate a realistic progression of distribution shifts in aneurysmal flow modeling. Because SLE-FNO is purely architecture-based and adds task-agnostic single-layer extensions without relying on task identity, replay buffers, or regularization that depends on ordering, the method itself does not encode assumptions about sequence. Nevertheless, we did not perform permutation ablations or scale to more than four tasks owing to the substantial cost of generating additional high-fidelity CFD data. We will revise the abstract to replace 'robustly task-agnostic' with 'demonstrates strong task-agnostic adaptation in the evaluated setting' and will add a dedicated limitations paragraph in the discussion that explicitly acknowledges the restricted experimental scope while outlining directions for future validation on varied geometries and Reynolds numbers. revision: partial
Circularity Check
No circularity detected; purely empirical comparison with no derivations
full rationale
The manuscript introduces SLE-FNO as an architectural extension to FNO and reports empirical performance on a fixed sequence of four out-of-distribution tasks derived from 230 CFD runs. No mathematical derivations, uniqueness theorems, or first-principles predictions are claimed; results consist of accuracy, forgetting, and parameter-count metrics obtained by training and evaluating the models on the described data. Consequently there are no steps that reduce by construction to fitted inputs, self-citations, or renamed ansatzes. The work is self-contained as an experimental benchmark.
Axiom & Free-Parameter Ledger
invented entities (1)
-
Single-Layer Extension (SLE)
no independent evidence
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.leanwashburn_uniqueness_aczel unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
SLE-FNO keeps pretrained FNO backbone frozen and adapts by introducing a single task-specific FNO layer that produces a residual correction Z_SLE-FNO added to the frozen backbone output.
-
IndisputableMonolith/Foundation/AbsoluteFloorClosure.leanabsolute_floor_iff_bare_distinguishability unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
KPCA reconstruction error used for task-agnostic routing and OOD detection with percentile threshold τ.
What do these tags mean?
- matches
- The paper's claim is directly supported by a theorem in the formal canon.
- supports
- The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends
- The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses
- The paper appears to rely on the theorem as machinery.
- contradicts
- The paper's claim conflicts with a theorem or certificate in the canon.
- unclear
- Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 1 Pith paper
-
Replay-Based Continual Learning for Physics-Informed Neural Operators
A replay-based continual learning strategy for physics-informed neural operators mitigates catastrophic forgetting on prior physical problems while enabling efficient adaptation to new data using only physical constraints.
Reference graph
Works this paper leans on
-
[1]
J. Thiyagalingam, M. Shankar, G. Fox, and T. Hey. Scientific machine learning benchmarks. Nature Reviews Physics, 4(6):413–420, 2022
work page 2022
-
[2]
G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang. Physics- informed machine learning.Nature Reviews Physics, 3(6):422–440, 2021
work page 2021
-
[3]
S. L. Brunton and J. N. Kutz.Data-driven science and engineering: Machine learning, dynamical systems, and control. Cambridge University Press, 2022
work page 2022
-
[4]
Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anand- kumar. Fourier neural operator for parametric partial differential equations.arXiv preprint arXiv:2010.08895, 2020
work page internal anchor Pith review Pith/arXiv arXiv 2010
-
[5]
L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators.Nature Machine Intelligence, 3(3):218–229, 2021
work page 2021
-
[6]
N. Kovachki, Z. Li, B. Liu, K. Azizzadenesheli, K. Bhattacharya, A. Stuart, and A. Anand- kumar. Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research, 24(89):1–97, 2023
work page 2023
- [7]
-
[8]
S. Goswami, A. Bora, Y. Yu, and G. E. Karniadakis. Physics-informed deep neural operator networks. InMachine Learning in Modeling and Simulation: Methods and Applications, pages 219–254. Springer, 2023. 45
work page 2023
-
[9]
S. Wang, Y. Teng, and P. Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks.SIAM Journal on Scientific Computing, 43(5):A3055– A3081, 2021
work page 2021
- [10]
- [11]
-
[12]
S. S. Menon, T. Mondal, S. Brahmachary, A. Panda, S. M. Joshi, K. Kalyanaraman, and A. D. Jagtap. On scientific foundation models: Rigorous definitions, key applications, and a survey.SSRN, 2025
work page 2025
-
[13]
J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence.Dataset shift in machine learning. MIT Press, 2008
work page 2008
-
[14]
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks.arXiv preprint arXiv:1610.02136, 2016
work page internal anchor Pith review Pith/arXiv arXiv 2016
-
[15]
J. Yang, K. Zhou, Y. Li, and Z. Liu. Generalized out-of-distribution detection: A survey. International Journal of Computer Vision, 132(12):5635–5662, 2024
work page 2024
-
[16]
L. Yuan, H. S. Park, and E. Lejeune. Towards out of distribution generalization for problems in mechanics.Computer Methods in Applied Mechanics and Engineering, 400:115569, 2022
work page 2022
- [17]
-
[18]
Y. M. K. W. Church, Z. Chen. Emerging trends: A gentle introduction to fine-tuning.Natural Language Engineering, 27(6):763–778, 2021
work page 2021
-
[19]
R. M. French. Catastrophic forgetting in connectionist networks.Trends in Cognitive Sci- ences, 3(4):128–135, 1999
work page 1999
-
[20]
J. Gama, I. Žliobait˙ e, A. Bifet, M. Pechenizkiy, and A. Bouchachia. A survey on concept drift adaptation.ACM computing surveys (CSUR), 46(4):1–37, 2014
work page 2014
- [21]
-
[22]
Y.Ovadia, E.Fertig, J.Ren, Z.Nado, D.Sculley, S.Nowozin, J.Dillon, B.Lakshminarayanan, and J. Snoek. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift.Advances in Neural Information Processing Systems, 32, 2019
work page 2019
-
[23]
P. W. Koh, S. Sagawa, H. Marklund, S. M. Xie, M. Zhang, A. Balsubramani, W. Hu, M. Ya- sunaga, R. L. Phillips, I. Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. InInternational Conference on Machine Learning, pages 5637–5664. PMLR, 2021
work page 2021
-
[24]
R. Hadsell, D. Rao, A. A. Rusu, and R. Pascanu. Embracing change: Continual learning in deep neural networks.Trends in Cognitive Sciences, 24(12):1028–1040, 2020. 46
work page 2020
-
[25]
M. De Lange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars. A continual learning survey: Defying forgetting in classification tasks.IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7):3366–3385, 2021
work page 2021
-
[26]
L. Wang, X. Zhang, H. Su, and J. Zhu. A comprehensive survey of continual learning: Theory, method and application.IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(8):5362–5383, 2024
work page 2024
-
[27]
G. M. Van de Ven and A. S. Tolias. Three scenarios for continual learning.arXiv preprint arXiv:1904.07734, 2019
work page internal anchor Pith review Pith/arXiv arXiv 1904
- [28]
-
[29]
J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks.Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017
work page 2017
-
[30]
D. Rolnick, A. Ahuja, J. Schwarz, T. Lillicrap, and G. Wayne. Experience replay for continual learning.Advances in Neural Information Processing Systems, 32, 2019
work page 2019
-
[31]
H. Shin, J. K. Lee, J. Kim, and J. Kim. Continual learning with deep generative replay. Advances in Neural Information Processing Systems, 30, 2017
work page 2017
-
[32]
M. Farajtabar, N. Azizan, A. Mott, and A. Li. Orthogonal gradient descent for continual learning.arXiv preprint arXiv:1910.07104, 2020
-
[33]
D. Lopez-Paz and M. Ranzato. Gradient episodic memory for continual learning.Advances in Neural Information Processing Systems, 30, 2017
work page 2017
- [34]
-
[35]
S. Lazebnik A. Mallya. Packnet: Adding multiple tasks to a single network by iterative pruning. InProceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7765–7773, 2018
work page 2018
- [36]
-
[37]
A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks.arXiv preprint arXiv:1606.04671, 2016
work page internal anchor Pith review Pith/arXiv arXiv 2016
-
[38]
D. Rao, F. Visin, A. Rusu, R. Pascanu, Y. W. Teh, and R. Hadsell. Continual unsupervised representation learning.Advances in Neural Information Processing Systems, 32, 2019
work page 2019
-
[39]
H. Cha, J. Lee, and J. Shin. Co2l: Contrastive continual learning. InProceedings of the IEEE/CVF International Conference on Computer Vision, pages 9516–9525, 2021
work page 2021
-
[40]
D. Zhou, Q. Wang, Z. Qi, H. Ye, D. Zhan, and Z. Liu. Class-incremental learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 47
work page 2024
-
[41]
G. M. Van de Ven, T. Tuytelaars, and A. S. Tolias. Three types of incremental learning. Nature Machine Intelligence, 4(12):1185–1197, 2022
work page 2022
- [42]
-
[43]
Q. Besnard and N. Ragot. Continual learning for time series forecasting: A first survey. Engineering Proceedings, 68(1):49, 2024
work page 2024
- [44]
- [45]
-
[46]
T. Tripura and S. Chakraborty. Neural combinatorial wavelet neural operator for catas- trophic forgetting free in-context operator learning of multiple partial differential equations. Computer Physics Communications, page 109882, 2025
work page 2025
-
[47]
K. M. Samuel and F. Ahmed. Continual learning strategies for 3D engineering regression problems: A benchmarking study.Journal of Computing and Information Science in Engi- neering, 25(10):101003, 2025
work page 2025
-
[48]
H. Kang, J. Yoon, S. J. Hwang, and C. D. Yoo. Continual learning: Forget-free winning subnetworks for video representations.IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
work page 2024
-
[49]
M. Mahmoudi, A. Farghadan, D. R. McConnell, A. J. Barker, J. J. Wentzel, M. J. Budoff, and A. Arzani. The story of wall shear stress in coronary artery atherosclerosis: biochemical transport and mechanotransduction.Journal of Biomechanical Engineering, 143(4):041002, 2021
work page 2021
-
[50]
M. L. Raghavan, B. Ma, and R. E. Harbaugh. Quantified aneurysm shape and rupture risk. Journal of Neurosurgery, 102(2):355–362, 2005
work page 2005
-
[51]
J. R. Womersley. Method for the calculation of velocity, rate of flow and viscous drag in arteries when the pressure gradient is known.The Journal of Physiology, 127(3):553, 1955
work page 1955
-
[52]
D. N. Ku, D. P. Giddens, C. K. Zarins, and S. Glagov. Pulsatile flow and atherosclerosis in the human carotid bifurcation. positive correlation between plaque location and low oscillating shear stress.Arteriosclerosis: An Official Journal of the American Heart Association, Inc., 5(3):293–302, 1985
work page 1985
-
[53]
A. M. Malek, S. L. Alper, and S. Izumo. Hemodynamic shear stress and its role in atheroscle- rosis.Jama, 282(21):2035–2042, 1999
work page 2035
-
[54]
A. Arzani. Accounting for residence-time in blood rheology models: do we really need non- newtonian blood flow modelling in large arteries?Journal of The Royal Society Interface, 15(146):20180486, 2018
work page 2018
-
[55]
B. Staarmann, M. Smith, and C. J. Prestigiacomo. Shear stress and aneurysms: a review. Neurosurgical Focus, 47(1):E2, 2019. 48
work page 2019
-
[56]
Ayachit.The paraview guide: a parallel visualization application
U. Ayachit.The paraview guide: a parallel visualization application. Kitware, Inc., 2015
work page 2015
-
[57]
W. M. Czarnecki, S. Osindero, M. Jaderberg, G. Swirszcz, and R. Pascanu. Sobolev training for neural networks.Advances in Neural Information Processing Systems, 30, 2017
work page 2017
-
[58]
I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks.arXiv preprint arXiv:1312.6211, 2013
work page internal anchor Pith review Pith/arXiv arXiv 2013
- [59]
-
[60]
L. Y. Pratt. Discriminability-based transfer between neural networks.Advances in Neural Information Processing Systems, 5, 1992
work page 1992
-
[61]
P. Buzzega, M. Boschini, A. Porrello, D. Abati, and S. Calderara. Dark experience for general continual learning: a strong, simple baseline.Advances in Neural Information Processing Systems, 33:15920–15930, 2020
work page 2020
- [62]
-
[63]
Efficient Lifelong Learning with A-GEM
A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny. Efficient lifelong learning with A-GEM.arXiv preprint arXiv:1812.00420, 2018
work page internal anchor Pith review Pith/arXiv arXiv 2018
-
[64]
E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, W. Chen, et al. LoRA: Low-rank adaptation of large language models.ICLR, 1(2):3, 2022
work page 2022
-
[65]
J. O. Zhang, A. Sax, A. Zamir, L. Guibas, and J. Malik. Side-tuning: a baseline for network adaptation via additive side networks. InEuropean Conference on Computer Vision, pages 698–714. Springer, 2020
work page 2020
-
[66]
N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. Hamprecht, Y. Bengio, and A.Courville. On thespectralbiasof neuralnetworks. InInternational Conference on Machine Learning, pages 5301–5310. PMLR, 2019
work page 2019
- [67]
-
[68]
D. Ha, A. Dai, and Q. V. Le. Hypernetworks.arXiv preprint arXiv:1609.09106, 2016
work page internal anchor Pith review Pith/arXiv arXiv 2016
-
[69]
R. Aljundi, P. Chakravarty, and T. Tuytelaars. Expert gate: Lifelong learning with a net- work of experts. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3366–3375, 2017
work page 2017
-
[70]
H. Zhu, M. Majzoubi, A. Jain, and A. Choromanska. TAME: Task agnostic continual learning using multiple experts. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4139–4148, 2024. 49
work page 2024
-
[71]
G. Kim, S. Esmaeilpour, C. Xiao, and B. Liu. Continual learning based on ood detection and task masking. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3856–3866, 2022
work page 2022
-
[72]
M. Wortsman, V. Ramanujan, R. Liu, A. Kembhavi, M. Rastegari, J. Yosinski, and A. Farhadi. Supermasks in superposition.Advances in Neural Information Processing Sys- tems, 33:15173–15184, 2020
work page 2020
-
[73]
Nonlinearcomponentanalysisasakerneleigenvalue problem.Neural Computation, 10(5):1299–1319, 1998
B.Schölkopf, A.Smola, andK.R.Müller. Nonlinearcomponentanalysisasakerneleigenvalue problem.Neural Computation, 10(5):1299–1319, 1998
work page 1998
-
[74]
B. Schölkopf, R. C. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt. Support vector method for novelty detection.Advances in Neural Information Processing Systems, 12, 1999
work page 1999
-
[75]
K. Lee, K. Lee, H. Lee, and J. Shin. A simple unified framework for detecting out-of- distribution samples and adversarial attacks.Advances in Neural Information Processing Systems, 31, 2018
work page 2018
-
[76]
W. Liu, X. Wang, J. Owens, and Y. Li. Energy-based out-of-distribution detection.Advances in Neural Information Processing Systems, 33:21464–21475, 2020
work page 2020
-
[77]
K. Fang, Q. Tao, K. Lv, M. He, X. Huang, and J. Yang. Kernel PCA for out-of-distribution detection.Advances in Neural Information Processing Systems, 37:134317–134344, 2024
work page 2024
-
[78]
A. Rahimi and B. Recht. Random features for large-scale kernel machines.Advances in Neural Information Processing Systems, 20, 2007
work page 2007
-
[79]
M. Takamoto, T. Praditia, R. Leiteritz, D. MacKinlay, F. Alesiani, D. Pflüger, and M. Niepert. Pdebench: An extensive benchmark for scientific machine learning.Advances in Neural Information Processing Systems, 35:1596–1611, 2022
work page 2022
-
[80]
H. Semmelrock, T. Ross-Hellauer, S. Kopeinik, D. Theiler, A. Haberl, S. Thalmann, and D. Kowald. Reproducibility in machine-learning-based research: Overview, barriers, and drivers.AI Magazine, 46(2):e70002, 2025
work page 2025
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.