pith. machine review for the scientific record. sign in

arxiv: 2605.10654 · v1 · submitted 2026-05-11 · 💻 cs.LG · cs.AI

Recognition: 2 theorem links

· Lean Theorem

Active Learning for Gaussian Process Regression Under Self-Induced Boltzmann Weights

Authors on Pith no claims yet

Pith reviewed 2026-05-12 04:18 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords active learningGaussian process regressionBoltzmann distributionpotential energy surfacesacquisition functionsThompson samplingcomputational chemistrydrug discovery
0
0 comments X

The pith

A Gaussian process acquisition function learns functions under self-induced Boltzmann weights without estimating the partition function.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops an active learning approach for an unknown function whose prediction error is measured under a Boltzmann distribution that the function itself defines. Such self-induced weighting occurs in potential energy surface modeling for chemistry, where the distribution is both unknown and has an intractable partition function. AB-SID-iVAR is introduced as a Gaussian process acquisition function that approximates the target distribution in closed form. The method comes with a guarantee that prediction error vanishes with high probability under mild conditions, plus a stronger average-case bound. Experiments show gains over baselines on synthetic data and real chemistry and drug discovery problems.

Core claim

We propose AB-SID-iVAR, a Gaussian Process-based acquisition function that approximates the intractable Bayesian target distribution in closed form while avoiding partition function estimation, and is applicable to both discrete and continuous input domains. Despite the unknown target, under mild conditions, we establish that the terminal prediction error vanishes with high probability, and provide a tighter average-case guarantee.

What carries the argument

AB-SID-iVAR, a Gaussian Process-based acquisition function that approximates the intractable Bayesian target distribution in closed form while avoiding partition function estimation

Load-bearing premise

The mild conditions required for the vanishing prediction error guarantee must hold, and the closed-form approximation to the Bayesian target must be sufficiently accurate.

What would settle it

Observe whether the prediction error under the self-induced Boltzmann distribution reaches near zero after a finite number of queries on a real potential energy surface task, or if it plateaus at a positive value.

Figures

Figures reproduced from arXiv: 2605.10654 by Henry Moss, Jixiang Qing, Matthias Sachs.

Figure 1
Figure 1. Figure 1: Spectrum of AL methods by knowledge of the target distribu [PITH_FULL_IMAGE:figures/full_fig_p001_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Comparison of active learning methods on the (continuous input) Gramacy 2D function [Gramacy and Lee, [PITH_FULL_IMAGE:figures/full_fig_p002_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Weighted MSE convergence on synthetic benchmarks across dimensions [PITH_FULL_IMAGE:figures/full_fig_p006_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Weighted MSE convergence on four PES fitting tasks of increasing dimensionality: [PITH_FULL_IMAGE:figures/full_fig_p008_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: Active learning for molecular property prediction on GuacaMol scoring functions (Median 1, Median 2). [PITH_FULL_IMAGE:figures/full_fig_p008_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Acquisition function comparison on a 2D GP prior. We use Monte Carlo with 1024 samples (MC-SID, [PITH_FULL_IMAGE:figures/full_fig_p015_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Weighted MSE convergence on synthetic benchmarks with discrete input domains across dimensions [PITH_FULL_IMAGE:figures/full_fig_p026_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Ablation study comparing surrogate forms, acquisition strategies, and constraint sets. [PITH_FULL_IMAGE:figures/full_fig_p027_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Sensitivity of SMC particle size N ∈ {250d, 500d, 1000d, 1500d} on GP prior functions across d ∈ {2, 4, 6, 8}. Top: weighted MSE convergence; curves overlap across particle counts. Bottom: per-iteration MCMC time, scaling linearly with N. D.8 Sensitivity Analysis of λ and b We investigate sensitivity to the temperature parameter λ and bias function b(x) on functions drawn from a 2 dimensional GP prior acro… view at source ↗
Figure 10
Figure 10. Figure 10: Sensitivity analysis of temperature [PITH_FULL_IMAGE:figures/full_fig_p030_10.png] view at source ↗
Figure 11
Figure 11. Figure 11: Pairwise comparison of predicted GP posterior mean [PITH_FULL_IMAGE:figures/full_fig_p031_11.png] view at source ↗
read the original abstract

We consider the active learning problem where the goal is to learn an unknown function with low prediction error under an unknown Boltzmann distribution induced by the function itself. This self-induced weighting arises naturally in problems such as potential energy surface (PES) modeling in computational chemistry, yet poses unique challenges as the target distribution is unknown and its partition function is intractable. We propose \texttt{AB-SID-iVAR}, a Gaussian Process-based acquisition function that approximates the intractable Bayesian target distribution in closed form while avoiding partition function estimation, and is applicable to both discrete and continuous input domains. We also analyze a Thompson sampling alternative (\texttt{TS-SID-iVAR}) as a higher variance Monte Carlo variant. Despite the unknown target, under mild conditions, we establish that the terminal prediction error vanishes with high probability, and provide a tighter average-case guarantee. We demonstrate consistent improvements over existing approaches in this setting on synthetic benchmarks and real-world PES modeling and drug discovery tasks.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper addresses active learning for Gaussian process regression when the target distribution is a self-induced Boltzmann weight depending on the unknown function itself (as in PES modeling). It proposes AB-SID-iVAR, a closed-form GP acquisition function that approximates the intractable Bayesian target without estimating the partition function Z, together with a Thompson-sampling variant TS-SID-iVAR. Under unspecified mild conditions the authors prove that terminal prediction error vanishes with high probability and supply a tighter average-case bound; empirical results on synthetic benchmarks and real PES/drug-discovery tasks show consistent gains over baselines.

Significance. If the closed-form approximation remains sufficiently faithful to the true f-dependent target throughout the active-learning loop and the mild conditions can be verified in practice, the work would be significant for domains where the sampling distribution is induced by the unknown function. The avoidance of partition-function estimation and the provision of both high-probability and average-case consistency results are notable strengths; the empirical validation on chemistry-relevant tasks further supports potential utility.

major comments (2)
  1. [Abstract and §4] Abstract and §4 (theoretical analysis): the central guarantee that 'the terminal prediction error vanishes with high probability' under 'mild conditions' is load-bearing for the contribution, yet the abstract and the statement of the theorem provide neither an explicit list of those conditions nor a quantitative bound on the approximation error between the closed-form AB-SID-iVAR target and the true self-induced posterior. Without such a bound it is unclear whether the approximation error contracts at a rate compatible with the GP consistency argument when the posterior is still diffuse.
  2. [§3.2] §3.2 (definition of AB-SID-iVAR): the claim that the acquisition function 'approximates the intractable Bayesian target distribution in closed form while avoiding partition function estimation' requires a precise statement of the approximation (e.g., which moments or variational family are used) and a proof that the resulting acquisition remains sufficiently close to the true expected information gain for the consistency result to carry through.
minor comments (2)
  1. [§2] Notation for the self-induced Boltzmann weight and the induced measure should be introduced once and used consistently; several passages switch between p(f) and the normalized weight without explicit re-definition.
  2. [§5] Figure captions for the PES and drug-discovery experiments should state the number of independent runs, the precise definition of 'prediction error', and whether error bars represent standard deviation or standard error.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful and constructive review. The comments correctly identify areas where the theoretical presentation can be strengthened for clarity. We address each point below and will revise the manuscript to incorporate explicit conditions, bounds, and proofs as detailed in our responses.

read point-by-point responses
  1. Referee: [Abstract and §4] Abstract and §4 (theoretical analysis): the central guarantee that 'the terminal prediction error vanishes with high probability' under 'mild conditions' is load-bearing for the contribution, yet the abstract and the statement of the theorem provide neither an explicit list of those conditions nor a quantitative bound on the approximation error between the closed-form AB-SID-iVAR target and the true self-induced posterior. Without such a bound it is unclear whether the approximation error contracts at a rate compatible with the GP consistency argument when the posterior is still diffuse.

    Authors: We agree that the mild conditions should be stated explicitly and that a quantitative bound on the approximation error is needed to fully support the consistency claim. In the revised version we will list the conditions explicitly in the abstract and in the theorem statement of §4 (compact input domain, continuous kernel with bounded variance, and Lipschitz continuity of the target function). We will also add a supporting lemma bounding the pointwise difference between the AB-SID-iVAR acquisition and the true self-induced expected information gain by a term proportional to the maximum posterior standard deviation; this term contracts as the GP posterior concentrates, ensuring compatibility with the high-probability vanishing-error argument even while the posterior remains diffuse early in the loop. revision: yes

  2. Referee: [§3.2] §3.2 (definition of AB-SID-iVAR): the claim that the acquisition function 'approximates the intractable Bayesian target distribution in closed form while avoiding partition function estimation' requires a precise statement of the approximation (e.g., which moments or variational family are used) and a proof that the resulting acquisition remains sufficiently close to the true expected information gain for the consistency result to carry through.

    Authors: We will revise §3.2 to give a precise definition: AB-SID-iVAR replaces the intractable self-induced Boltzmann weights with a moment-matched Gaussian approximation constructed from the current GP posterior mean and variance at each candidate point, thereby avoiding any estimation of the partition function Z. We will add a proposition proving that the resulting acquisition differs from the true expected information gain by an error controlled by the GP posterior variance; under the mild conditions already used for consistency, this error is small enough that the high-probability terminal-error guarantee continues to hold. The proof will be placed immediately after the definition in §3.2 and referenced in §4. revision: yes

Circularity Check

0 steps flagged

No circularity: novel closed-form approximation and consistency result are independently derived

full rationale

The paper defines a new acquisition function AB-SID-iVAR that constructs a closed-form approximation to the self-induced Boltzmann target without using partition functions, then separately states a consistency theorem that the terminal GP error vanishes whp under mild conditions. Neither the approximation nor the guarantee is obtained by fitting a parameter to data and relabeling it a prediction, nor by self-citation that reduces the central claim to an unverified prior result of the same authors. The derivation chain therefore remains self-contained against external GP theory and active-learning benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Review performed on abstract only; no explicit free parameters, axioms, or invented entities are stated in the provided text.

pith-pipeline@v0.9.0 · 5457 in / 1092 out tokens · 41594 ms · 2026-05-12T04:18:26.796350+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

89 extracted references · 89 canonical work pages · 2 internal anchors

  1. [1]

    Physical Chemistry Chemical Physics , volume=

    Ab initio investigation of the role of the d-states occupation on the adsorption properties of H 2, CO, CH 4 and CH 3 OH on the Fe 13, Co 13, Ni 13 and Cu 13 clusters , author=. Physical Chemistry Chemical Physics , volume=. 2021 , publisher=

  2. [2]

    2012 , publisher=

    Modern quantum chemistry: introduction to advanced electronic structure theory , author=. 2012 , publisher=

  3. [3]

    The Journal of Chemical Physics , volume=

    The quantum dynamics of H2 on Cu (111) at a surface temperature of 925 K: Comparing state-of-the-art theory to state-of-the-art experiments 2 , author=. The Journal of Chemical Physics , volume=. 2023 , publisher=

  4. [4]

    Advances in neural information processing systems , volume=

    Direct preference optimization: Your language model is secretly a reward model , author=. Advances in neural information processing systems , volume=

  5. [5]

    New Journal of Physics , volume=

    Properties of metal--water interfaces studied from first principles , author=. New Journal of Physics , volume=

  6. [6]

    Physical review B , volume=

    Computer simulation of local order in condensed phases of silicon , author=. Physical review B , volume=. 1985 , publisher=

  7. [7]

    The Journal of chemical physics , volume=

    A foundation model for atomistic materials chemistry , author=. The Journal of chemical physics , volume=. 2025 , publisher=

  8. [8]

    Machine Learning: Science and Technology , volume=

    Benchmarking of machine learning interatomic potentials for reactive hydrogen dynamics at metal surfaces , author=. Machine Learning: Science and Technology , volume=. 2024 , publisher=

  9. [9]

    The Journal of chemical physics , volume=

    Structure and shape variations in intermediate-size copper clusters , author=. The Journal of chemical physics , volume=. 2006 , publisher=

  10. [10]

    arXiv preprint arXiv:2409.09787 , year=

    Bnem: A boltzmann sampler based on bootstrapped noised energy matching , author=. arXiv preprint arXiv:2409.09787 , year=

  11. [11]

    arXiv preprint arXiv:2603.17579 , year=

    One-Step Sampler for Boltzmann Distributions via Drifting , author=. arXiv preprint arXiv:2603.17579 , year=

  12. [12]

    2023 , publisher=

    Understanding molecular simulation: from algorithms to applications , author=. 2023 , publisher=

  13. [13]

    Journal of artificial intelligence research , volume=

    Active learning with statistical models , author=. Journal of artificial intelligence research , volume=

  14. [14]

    Neural computation , volume=

    Information-based objective functions for active data selection , author=. Neural computation , volume=. 1992 , publisher=

  15. [15]

    npj Computational Materials , volume=

    Hyperactive learning for data-driven interatomic potentials , author=. npj Computational Materials , volume=. 2023 , publisher=

  16. [16]

    Computational Statistics & Data Analysis , volume=

    Gaussian processes and limiting linear models , author=. Computational Statistics & Data Analysis , volume=. 2008 , publisher=

  17. [17]

    Advances in Neural Information Processing Systems , volume=

    Bayesian active learning with fully Bayesian Gaussian processes , author=. Advances in Neural Information Processing Systems , volume=

  18. [18]

    Bayesian active learning for classification and preferenc e learning,

    Bayesian active learning for classification and preference learning , author=. arXiv preprint arXiv:1112.5745 , year=

  19. [19]

    Gaussian process optimization in the bandit setting: No regret and experimental design,

    Gaussian process optimization in the bandit setting: No regret and experimental design , author=. arXiv preprint arXiv:0912.3995 , year=

  20. [20]

    arXiv preprint arXiv:2502.16870 , year=

    Distributionally Robust Active Learning for Gaussian Process Regression , author=. arXiv preprint arXiv:2502.16870 , year=

  21. [21]

    Advances in neural information processing systems , volume=

    Sampling for inference in probabilistic models with fast Bayesian quadrature , author=. Advances in neural information processing systems , volume=

  22. [22]

    arXiv preprint arXiv:2506.16471 , year=

    Progressive Inference-Time Annealing of Diffusion Models for Sampling from Boltzmann Densities , author=. arXiv preprint arXiv:2506.16471 , year=

  23. [23]

    arXiv preprint arXiv:2104.02822 , year=

    Low-regret active learning , author=. arXiv preprint arXiv:2104.02822 , year=

  24. [24]

    the Annals of Probability , pages=

    On tail probabilities for martingales , author=. the Annals of Probability , pages=. 1975 , publisher=

  25. [25]

    The journal of chemical physics , volume=

    Equation of state calculations by fast computing machines , author=. The journal of chemical physics , volume=. 1953 , publisher=

  26. [26]

    International conference on neural information processing , pages=

    Contextual bandit for active learning: Active thompson sampling , author=. International conference on neural information processing , pages=. 2014 , organization=

  27. [27]

    Exponential convergence of Langevin distributions and their discrete approximations , author=

  28. [28]

    Advances in Neural Information Processing Systems , volume=

    Unexpected improvements to expected improvement for bayesian optimization , author=. Advances in Neural Information Processing Systems , volume=

  29. [29]

    arXiv preprint arXiv:2402.02229 , year=

    Vanilla Bayesian optimization performs great in high dimensions , author=. arXiv preprint arXiv:2402.02229 , year=

  30. [30]

    Uncertainty in Artificial Intelligence , pages=

    No-regret approximate inference via Bayesian optimisation , author=. Uncertainty in Artificial Intelligence , pages=. 2021 , organization=

  31. [31]

    International conference on artificial intelligence and statistics , pages=

    Prediction-oriented bayesian active learning , author=. International conference on artificial intelligence and statistics , pages=. 2023 , organization=

  32. [32]

    Journal of Computational Physics , volume=

    Bayesian optimization with output-weighted optimal sampling , author=. Journal of Computational Physics , volume=. 2021 , publisher=

  33. [33]

    Advances in neural information processing systems , volume=

    Adversarially robust optimization with Gaussian processes , author=. Advances in neural information processing systems , volume=

  34. [34]

    Journal of Machine Learning Research , volume=

    Convergence guarantees for Gaussian process means with misspecified likelihoods and smoothness , author=. Journal of Machine Learning Research , volume=

  35. [35]

    International Conference on Machine Learning , pages=

    Efficiently sampling functions from Gaussian process posteriors , author=. International Conference on Machine Learning , pages=. 2020 , organization=

  36. [36]

    arXiv preprint arXiv:2510.23681 , year=

    Informed Initialization for Bayesian Optimization and Active Learning , author=. arXiv preprint arXiv:2510.23681 , year=

  37. [37]

    Statistical Science , volume=

    Modern Bayesian experimental design , author=. Statistical Science , volume=. 2024 , publisher=

  38. [38]

    arXiv preprint arXiv:2311.14645 , year=

    A general framework for user-guided Bayesian optimization , author=. arXiv preprint arXiv:2311.14645 , year=

  39. [39]

    Journal of Machine Learning Research , volume=

    Pathwise conditioning of Gaussian processes , author=. Journal of Machine Learning Research , volume=

  40. [40]

    Annals of statistics , pages=

    Adaptive estimation of a quadratic functional by model selection , author=. Annals of statistics , pages=. 2000 , publisher=

  41. [41]

    Conference On Learning Theory , pages=

    Information directed sampling and bandits with heteroscedastic noise , author=. Conference On Learning Theory , pages=. 2018 , organization=

  42. [42]

    2023 , publisher =

    Garnett, Roman , title =. 2023 , publisher =

  43. [43]

    arXiv preprint arXiv:2506.17366 , year=

    Gaussian Processes and Reproducing Kernels: Connections and Equivalences , author=. arXiv preprint arXiv:2506.17366 , year=

  44. [44]

    Technometrics , number=

    Targeted Variance Reduction: Effective Bayesian Optimization of Black-Box Simulators with Noise Parameters , author=. Technometrics , number=. 2025 , publisher=

  45. [45]

    Journal of Machine Learning Research , year =

    Mert Gurbuzbalaban and Yuanhan Hu and Lingjiong Zhu , title =. Journal of Machine Learning Research , year =

  46. [46]

    Communications in Statistics-Theory and Methods , volume=

    Posterior contraction rates for constrained deep Gaussian processes in density estimation and classification , author=. Communications in Statistics-Theory and Methods , volume=. 2025 , publisher=

  47. [47]

    Journal of statistical planning and inference , volume=

    Posterior consistency of logistic Gaussian process priors in density estimation , author=. Journal of statistical planning and inference , volume=. 2007 , publisher=

  48. [48]

    Proceedings of the AAAI Conference on Artificial Intelligence , volume=

    Kernelized normalizing constant estimation: bridging Bayesian quadrature and Bayesian optimization , author=. Proceedings of the AAAI Conference on Artificial Intelligence , volume=

  49. [49]

    Advances in Neural Information Processing Systems , volume=

    Tanimoto random features for scalable molecular machine learning , author=. Advances in Neural Information Processing Systems , volume=

  50. [50]

    Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning

    Advantage-weighted regression: Simple and scalable off-policy reinforcement learning , author=. arXiv preprint arXiv:1910.00177 , year=

  51. [51]

    International Conference on Machine Learning , pages=

    Loss-guided diffusion models for plug-and-play controllable generation , author=. International Conference on Machine Learning , pages=. 2023 , organization=

  52. [52]

    Journal of chemical information and modeling , volume=

    GuacaMol: benchmarking models for de novo molecular design , author=. Journal of chemical information and modeling , volume=. 2019 , publisher=

  53. [53]

    Journal of chemical theory and computation , volume=

    Exploration, sampling, and reconstruction of free energy surfaces with Gaussian process regression , author=. Journal of chemical theory and computation , volume=. 2016 , publisher=

  54. [54]

    arXiv preprint arXiv:2403.03816 , year=

    Targeted variance reduction: Robust Bayesian optimization of black-box simulators with noise parameters , author=. arXiv preprint arXiv:2403.03816 , year=

  55. [55]

    Posterior consistency of Gaussian process prior for nonparametric binary regression , author=

  56. [56]

    Advances in neural information processing systems , volume=

    Bayesian optimization with exponential convergence , author=. Advances in neural information processing systems , volume=

  57. [57]

    2006 , publisher=

    Gaussian processes for machine learning , author=. 2006 , publisher=

  58. [58]

    arXiv preprint arXiv:2106.11719 , year=

    Test distribution-aware active learning: A principled approach against distribution shift and outliers , author=. arXiv preprint arXiv:2106.11719 , year=

  59. [59]

    Engineering with Computers , volume=

    Adaptive sampling with automatic stopping for feasible region identification in engineering design , author=. Engineering with Computers , volume=. 2022 , publisher=

  60. [60]

    Inference for L

    Jasra, Ajay and Stephens, David A and Doucet, Arnaud and Tsagaris, Theodoros , journal=. Inference for L. 2011 , publisher=

  61. [61]

    Statistics and Computing , volume=

    Sequential Monte Carlo on large binary sampling spaces , author=. Statistics and Computing , volume=. 2013 , publisher=

  62. [62]

    Journal of the Royal Statistical Society Series B: Statistical Methodology , volume=

    Sequential monte carlo samplers , author=. Journal of the Royal Statistical Society Series B: Statistical Methodology , volume=. 2006 , publisher=

  63. [63]

    1999 , publisher=

    Monte Carlo statistical methods , author=. 1999 , publisher=

  64. [64]

    Statistical science , volume=

    Design and analysis of computer experiments , author=. Statistical science , volume=. 1989 , publisher=

  65. [65]

    Forschungsbericht- Deutsche Forschungs- und Versuchsanstalt fur Luft- und Raumfahrt , year=

    A software package for sequential quadratic programming , author=. Forschungsbericht- Deutsche Forschungs- und Versuchsanstalt fur Luft- und Raumfahrt , year=

  66. [66]

    Iterated denoising energy matching for sampling from boltzmann densities

    Iterated denoising energy matching for sampling from boltzmann densities , author=. arXiv preprint arXiv:2402.06121 , year=

  67. [67]

    The Journal of Physical Chemistry Letters , volume=

    On-the-fly active learning of interatomic potentials for large-scale atomistic simulations , author=. The Journal of Physical Chemistry Letters , volume=. 2020 , publisher=

  68. [68]

    npj Computational Materials , volume=

    On-the-fly active learning of interpretable Bayesian force fields for atomistic rare events , author=. npj Computational Materials , volume=. 2020 , publisher=

  69. [69]

    Computational Materials Science , volume=

    Active learning of linearly parametrized interatomic potentials , author=. Computational Materials Science , volume=. 2017 , publisher=

  70. [70]

    Physical review letters , volume=

    Molecular dynamics with on-the-fly machine learning of quantum-mechanical forces , author=. Physical review letters , volume=. 2015 , publisher=

  71. [71]

    Advances in Neural Information Processing Systems , volume=

    Transductive active learning: Theory and applications , author=. Advances in Neural Information Processing Systems , volume=

  72. [72]

    Proceedings of the 23rd international conference on Machine learning , pages=

    Active learning via transductive experimental design , author=. Proceedings of the 23rd international conference on Machine learning , pages=

  73. [73]

    Acta Numerica , volume=

    Optimal experimental design: Formulations and computations , author=. Acta Numerica , volume=. 2024 , publisher=

  74. [74]

    npj Computational Materials , volume=

    Uncertainty driven active learning of coarse grained free energy models , author=. npj Computational Materials , volume=. 2024 , publisher=

  75. [75]

    Frogner, Charlie and Claici, Sebastian and Chien, Edward and Solomon, Justin , title =. J. Mach. Learn. Res. , month = jan, articleno =. 2021 , issue_date =

  76. [76]

    2009 , publisher=

    Active learning literature survey , author=. 2009 , publisher=

  77. [77]

    arXiv preprint arXiv:2502.09198 , year=

    Understanding high-dimensional bayesian optimization , author=. arXiv preprint arXiv:2502.09198 , year=

  78. [78]

    arXiv preprint arXiv:2402.02746 , year=

    Standard gaussian process is all you need for high-dimensional bayesian optimization , author=. arXiv preprint arXiv:2402.02746 , year=

  79. [79]

    Advances in neural information processing systems , volume=

    Scalable Thompson sampling using sparse Gaussian process models , author=. Advances in neural information processing systems , volume=

  80. [80]

    , author=

    Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. , author=. Journal of Machine Learning Research , volume=

Showing first 80 references.