pith. machine review for the scientific record.

arXiv: 2604.09135 · v1 · submitted 2026-04-10 · 📊 stat.ML · cs.LG · math.ST · stat.ME · stat.TH


Identifying Causal Effects Using a Single Proxy Variable


Pith reviewed 2026-05-10 17:17 UTC · model grok-4.3

classification 📊 stat.ML · cs.LG · math.ST · stat.ME · stat.TH

keywords causal inference · proxy variable · unobserved confounding · identifiability · SPICE · causal effect estimation · neural network

The pith

A single proxy variable with a known generation mechanism from the unobserved confounder allows identification of causal effects under the SPICE completeness assumption.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that if a single proxy for an unobserved confounder is available and the mechanism generating the proxy from the confounder is known, then under a completeness condition on that mechanism the causal effect of a treatment on an outcome can be recovered from observed data. This matters because many real-world settings have hidden confounders that prevent standard causal estimation, yet a measurable proxy is often present. The proof covers multi-dimensional proxies, general functional relationships, and wide classes of distributions, going beyond earlier results that were limited to simpler cases. The authors also supply a neural network procedure, SPICE-Net, that turns the identifiability result into practical estimates for discrete or continuous treatments.

Core claim

Under the SPICE completeness assumption on the mechanism that generates a single, possibly multi-dimensional proxy variable from the unobserved confounder, the causal effect of a treatment on an outcome is identifiable. The result extends proxy-variable identifiability to higher dimensions, more flexible functional forms, and broader distributions. SPICE-Net supplies a neural-network estimator that works for both discrete and continuous treatments.

What carries the argument

The SPICE completeness assumption on the known proxy-generation mechanism, which guarantees that the observed proxy supplies enough information about the confounder to recover the causal effect.

If this is right

  • Causal effects of treatment on outcome become identifiable without observing the confounder itself.
  • The result holds for proxies of any finite dimension and for general functional relationships between variables.
  • SPICE-Net provides a practical estimator applicable to both discrete and continuous treatments.
  • The approach broadens the settings in which proxy-variable methods can be used for causal inference.
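The stakes can be made concrete with a toy simulation (our construction with invented parameters, not the paper's model): a hidden binary confounder drives both treatment and outcome, so the naive contrast is biased, while adjustment on the confounder recovers the truth. The gap between the two is exactly what the proxy is meant to close.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy structural causal model (invented for illustration, not the paper's):
# binary confounder U drives treatment X and outcome Y; a proxy W is
# generated from U by a KNOWN mechanism (correct with probability 0.9).
U = rng.binomial(1, 0.5, n)                      # unobserved confounder
W = np.where(rng.random(n) < 0.9, U, 1 - U)      # the single observable proxy
X = rng.binomial(1, 0.2 + 0.6 * U)               # treatment, confounded by U
Y = 1.0 * X + 2.0 * U + rng.normal(0, 1, n)      # true causal effect: 1.0

naive = Y[X == 1].mean() - Y[X == 0].mean()      # biased: ignores U
# Oracle adjustment on U (possible here only because we simulated U);
# P(U = u) = 0.5 for both strata, so a plain average is the adjustment.
adj = np.mean([Y[(X == 1) & (U == u)].mean() - Y[(X == 0) & (U == u)].mean()
               for u in (0, 1)])

print(f"naive: {naive:.2f}, adjusted: {adj:.2f}, truth: 1.00")
```

With these parameters the naive contrast lands near 2.2 while the oracle adjustment sits near 1.0; the paper's claim is that the observed W, under SPICE, can stand in for the inaccessible oracle.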

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If auxiliary data can be used to learn or approximate the proxy-generation mechanism, the method could be applied in domains where the mechanism is not known exactly in advance.
  • The completeness condition may be checkable in practice by verifying whether the proxy distinguishes different levels of the confounder sufficiently.
  • The identifiability result could be combined with other partial-observation techniques to handle multiple proxies or additional measurement error.
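For the second bullet, in the fully discrete case the completeness of a known mechanism reduces to a linear-algebra condition, so a rank check is one plausible diagnostic. A minimal sketch under that assumption (the matrices are invented, not from the paper):

```python
import numpy as np

def mechanism_is_complete(M: np.ndarray, tol: float = 1e-10) -> bool:
    """M[w, u] = P(W = w | U = u): one column per confounder level.
    Completeness in the discrete case amounts to full column rank."""
    assert np.allclose(M.sum(axis=0), 1.0), "columns must be distributions"
    return np.linalg.matrix_rank(M, tol=tol) == M.shape[1]

informative = np.array([[0.9, 0.2],
                        [0.1, 0.8]])     # W distinguishes U = 0 from U = 1
uninformative = np.array([[0.5, 0.5],
                          [0.5, 0.5]])   # W carries no information about U

print(mechanism_is_complete(informative))     # True
print(mechanism_is_complete(uninformative))   # False
```

Beyond the discrete case, completeness is a condition on a conditional distribution family and is generally harder to verify; the rank check is only the finite-dimensional shadow of it.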

Load-bearing premise

That the known mechanism generating the proxy from the confounder satisfies the SPICE completeness condition so that the causal effect is uniquely recoverable.

What would settle it

A concrete data-generating process in which the proxy is produced from the confounder by a known mechanism satisfying SPICE, yet two distinct causal effects induce identical distributions over the observed variables. Exhibiting such a counterexample would refute the claim; its provable impossibility is exactly the content of the identifiability theorem.

Figures

Figures reproduced from arXiv: 2604.09135 by Niklas Pfister, Sebastian Weichwald, Silvan Vollmer.

Figure 1
Figure 1: Structural causal model M and directed acyclic graph G of Definition 1. Moreover, it induces a unique, well-defined probability distribution on the variables (U, W, X, Y), denoted P^M (see, for example, Bongers et al., 2021), and for each intervention do(X := x) that sets X to a fixed value x ∈ X, an interventional distribution P^{M; do(X := x)}_Y on Y. Furthermore, the implied joint distribution P^M is ab…
Figure 2
Figure 2: The first step of SPICE-Net is a neural network that builds on the Engression framework of Shen and Meinshausen (2025). It has weights γ, takes the treatment and the outcome as inputs, and appends independent standard Gaussian noise nodes ε1, …, εl−2 to the first l − 2 layers. We add samples from the distribution of E in the second-to-last layer and denote the output by w. It recovers the conditi…
Figure 3
Figure 3: Mean squared error (MSE) of the causal function estimators described in Section 5.1 and Appendix J for 2000 training samples from data sets A–D…
Figure 4
Figure 4: Mean squared error (MSE), in ten thousands, of causal function estimators compared to the ground-truth method Adj.-U on data from the Light Tunnel Mk2 from the Causal Chamber® (Gamella et al., 2025), from experiments I and II…
Figure 5
Figure 5: Extensions of Setting 2 with an additional observed confounder O ∈ O ⊆ R^l (left), unobserved mediation (middle), and noisy treatment and outcome (right). …as a function of the proxy. It suffices to observe a variable that satisfies the conditional independence in Equation 1, and it need not be a causal descendant of the confounder. Lastly, we consider a case where all variables are measured with error as on…
Figure 6
Figure 6: Mean squared error (MSE) of the causal function estimators described in Section 5.1 and Appendix J for 5000 training samples from data sets A–D…
Figure 7
Figure 7: The Light Tunnel Mk2 from Causal Chamber® (A and B) with its ground-truth graph in C. The variable types are control inputs I, sensor parameters P, and sensor measurements M. In the light tunnel, we consider green as the confounder, which is the brightness setting of the green LEDs on the main light source, and ir_1 as the treatment, which is an infrared intensity measurement produced by the first light sensor…
original abstract

Unobserved confounding is a key challenge when estimating causal effects from a treatment on an outcome in scientific applications. In this work, we assume that we observe a single, potentially multi-dimensional proxy variable of the unobserved confounder and that we know the mechanism that generates the proxy from the confounder. Under a completeness assumption on this mechanism, which we call Single Proxy Identifiability of Causal Effects or simply SPICE, we prove that causal effects are identifiable. We extend the proxy-based causal identifiability results by Kuroki and Pearl (2014); Pearl (2010) to higher dimensions, more flexible functional relationships and a broader class of distributions. Further, we develop a neural network based estimation framework, SPICE-Net, to estimate causal effects, which is applicable to both discrete and continuous treatments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript claims that causal effects of a treatment on an outcome are identifiable from observational data when a single (possibly multi-dimensional) proxy for the unobserved confounder is observed, the proxy-generation mechanism is known, and the mechanism satisfies the SPICE completeness condition. The authors extend the Kuroki-Pearl proxy-variable results to higher-dimensional and non-linear settings, prove identifiability under SPICE, and introduce the SPICE-Net neural estimator that applies to both discrete and continuous treatments.

Significance. If the SPICE-based identifiability result is correct, the work meaningfully relaxes the data requirements for proxy-variable causal inference, which is relevant for applications where only one proxy is feasible. The neural estimation procedure could provide a practical route to estimation once identifiability is secured.

major comments (2)
  1. [§3] §3 (Identifiability theorem): the proof that SPICE plus a known proxy mechanism yields point identification of the causal effect must be checked for completeness; the abstract states the result but the derivation steps, including how completeness rules out non-identifiable distributions, are not visible in the provided excerpt and require explicit verification.
  2. [§4] §4 (SPICE-Net estimator): consistency of the neural estimator is asserted under the identifiability conditions, yet no convergence rate, finite-sample error bound, or simulation study isolating the effect of the completeness assumption is referenced; this is load-bearing for the claim that the method is applicable in practice.
minor comments (2)
  1. [Abstract] Abstract: the statement that the proxy-generation mechanism is 'known' should be clarified—whether it is treated as given or must be estimated from auxiliary data.
  2. [§2] Notation: define the completeness condition SPICE formally (e.g., as an injectivity or density condition on the conditional distribution) at its first appearance rather than deferring the definition.
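The simulation the second major comment asks for could be sketched as follows (a hypothetical ablation with invented parameters, not taken from the paper): weaken a discrete proxy mechanism toward non-completeness and measure how much sampling noise is amplified when the known mechanism is inverted to recover the confounder distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
p_u = np.array([0.3, 0.7])     # latent confounder distribution (invented)
n, reps = 2_000, 100

def recovery_error(eps: float) -> float:
    """Mean L1 error in recovering P(U) from n samples of the proxy W.
    As eps -> 0.5 the two columns of M coincide and completeness fails."""
    M = np.array([[1 - eps, eps],
                  [eps, 1 - eps]])             # known mechanism P(W = w | U = u)
    errs = []
    for _ in range(reps):
        w = rng.choice(2, size=n, p=M @ p_u)   # draw W from its marginal
        p_w_hat = np.bincount(w, minlength=2) / n
        p_u_hat = np.linalg.pinv(M) @ p_w_hat  # invert the known mechanism
        errs.append(np.abs(p_u_hat - p_u).sum())
    return float(np.mean(errs))

strong, weak = recovery_error(0.05), recovery_error(0.45)
print(f"informative mechanism: {strong:.4f}, near-singular: {weak:.4f}")
```

The near-singular mechanism inflates the error by roughly the condition number of M, which is one way to quantify "strength of the completeness condition" in the requested study.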

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their careful and constructive review. We address each major comment below, clarifying the identifiability result and outlining revisions to improve the presentation of the estimator.

point-by-point responses
  1. Referee: [§3] §3 (Identifiability theorem): the proof that SPICE plus a known proxy mechanism yields point identification of the causal effect must be checked for completeness; the abstract states the result but the derivation steps, including how completeness rules out non-identifiable distributions, are not visible in the provided excerpt and require explicit verification.

    Authors: We appreciate the referee drawing attention to the need for transparent verification. The full proof appears in Section 3, where we establish that the SPICE completeness condition renders the integral operator mapping confounder distributions to the observed proxy distribution injective. This injectivity directly implies that distinct confounder distributions produce distinct observable laws, thereby uniquely identifying the causal effect as a functional of the confounder. To make the argument more self-contained, we will expand the section with an explicit step-by-step derivation and a new paragraph that isolates how completeness excludes non-identifiable distributions. revision: yes

  2. Referee: [§4] §4 (SPICE-Net estimator): consistency of the neural estimator is asserted under the identifiability conditions, yet no convergence rate, finite-sample error bound, or simulation study isolating the effect of the completeness assumption is referenced; this is load-bearing for the claim that the method is applicable in practice.

    Authors: We agree that stronger empirical and theoretical support for SPICE-Net would enhance the manuscript. Consistency follows from the identifiability theorem combined with standard neural approximation results, yet we do not supply explicit rates or finite-sample bounds. In revision we will add a simulation study that systematically varies the strength of the completeness condition while holding other factors fixed, thereby isolating its effect on estimation error. We will also include a brief discussion of the convergence properties implied by the existing theory. revision: partial
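The injectivity argument in the first response can be made concrete in the fully discrete case (our reconstruction with invented numbers, not the paper's proof): since W is independent of (X, Y) given U, the observable joint P(W, X, Y) is the known mechanism applied to the latent joint P(U, X, Y); a full-column-rank mechanism can be inverted, after which the adjustment formula yields the interventional quantity.

```python
import numpy as np

# Known mechanism P(W = w | U = u); full column rank, hence injective.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Ground-truth latent joint P(U, X, Y) over binary variables, shape (u, x, y).
P_uxy = np.array([[[0.10, 0.05], [0.05, 0.30]],
                  [[0.05, 0.10], [0.05, 0.30]]], dtype=float)
P_uxy /= P_uxy.sum()

# W ⊥ (X, Y) | U implies P(w, x, y) = sum_u M[w, u] * P(u, x, y):
P_wxy = np.einsum('wu,uxy->wxy', M, P_uxy)     # what we would observe

# Injectivity lets us invert and recover the latent joint exactly.
P_uxy_hat = np.einsum('uw,wxy->uxy', np.linalg.pinv(M), P_wxy)

# Back-door adjustment for P(Y = 1 | do(X = 1)) using the recovered joint.
p_u = P_uxy_hat.sum(axis=(1, 2))
p_y1_x1_u = P_uxy_hat[:, 1, 1] / P_uxy_hat[:, 1, :].sum(axis=1)
effect = float(np.sum(p_u * p_y1_x1_u))

print(np.allclose(P_uxy_hat, P_uxy), round(effect, 3))
```

The same inversion underlies the Kuroki–Pearl effect-restoration result cited in the report; the paper's contribution, as we read it, is extending the injectivity ("completeness") argument beyond this finite, matrix-invertible case.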

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper states a conditional identifiability result: given a known proxy-generation mechanism and the explicitly introduced completeness assumption SPICE, causal effects are identifiable. This is a standard theorem statement under stated assumptions, extending external prior work (Kuroki-Pearl 2014, Pearl 2010) without load-bearing self-citations, fitted-parameter predictions, or self-definitional reductions. The derivation chain does not collapse any claimed result to its own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The identifiability result rests on knowledge of the proxy generation mechanism and the SPICE completeness assumption; no free parameters or new postulated entities are introduced.

axioms (1)
  • domain assumption Completeness assumption on the proxy generation mechanism from the confounder (SPICE)
    This is the central condition invoked to prove identifiability from a single proxy.

pith-pipeline@v0.9.0 · 5441 in / 1261 out tokens · 49506 ms · 2026-05-10T17:17:52.554215+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

66 extracted references · 5 canonical work pages · 1 internal anchor

  1. Sheldon Axler. Linear Algebra Done Right. Undergraduate Texts in Mathematics. Springer, New York, 3rd edition, 2015.
  2. Ole Barndorff-Nielsen and Christian Halgreen. Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 38:309–311, 1977.
  3. Michele L. Bianchi, Svetlozar T. Rachev, Young S. Kim, and Frank J. Fabozzi. Tempered infinitely divisible distributions and processes. Theory of Probability and its Applications, 55(1):2–26, 2011.
  4. Stephan Bongers, Patrick Forré, Jonas Peters, and Joris M. Mooij. Foundations of structural causal models with cycles and latent variables. The Annals of Statistics, 49(5):2885–2915, 2021.
  5. Jarrett E. K. Byrnes and Laura E. Dee. Causal inference with observational data and unobserved confounding variables. Ecology Letters, 28(1):e70023, 2025.
  6. David Card and Alan B. Krueger. Minimum wages and employment: A case study of the fast food industry in New Jersey and Pennsylvania. The American Economic Review, 84(4):772–793, 1994.
  7. Raymond J. Carroll, David Ruppert, Leonard A. Stefanski, and Ciprian M. Crainiceanu. Measurement Error in Nonlinear Models: A Modern Perspective. Chapman and Hall/CRC, Boca Raton, Florida, 2nd edition, 2006.
  8. Ben Deaner. Controlling for latent confounding with triple proxies. arXiv preprint arXiv:2204.13815, 2023.
  9. Leonhard Euler. Introductio in analysin infinitorum. M.M. Bousquet, Lausanne, Switzerland, 1748.
  10. Gerald B. Folland. Real Analysis: Modern Techniques and Their Applications. John Wiley & Sons, New York, 2nd edition, 1999.
  11. Juan L. Gamella, Jonas Peters, and Peter Bühlmann. Causal chambers as a real-world physical testbed for AI methodology. Nature Machine Intelligence, 7:107–118, 2025.
  12. Simson Garfinkel. Differential privacy and the 2020 US Census. MIT Case Studies in Social and Ethical Responsibilities of Computing, 2022.
  13. Richard D. Gill and James M. Robins. Causal inference for complex longitudinal data: The continuous case. The Annals of Statistics, 29(6):1785–1811, 2001.
  14. Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
  15. Sander Greenland and Timothy L. Lash. Bias analysis. In Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, editors, Modern Epidemiology, pages 345–380. Lippincott Williams & Wilkins, Philadelphia, 3rd edition, 2015.
  16. Emil Grosswald. The Student t-distribution of any degree of freedom is infinitely divisible. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 36:103–109, 1976.
  17. Helen Guo, Elizabeth L. Ogburn, and Ilya Shpitser. Comparing two proxy methods for causal identification. arXiv preprint arXiv:2512.00175, 2025.
  18. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pages 1026–1034, Santiago, Chile, 2015.
  19. Roger A. Horn and Fred W. Steutel. On multivariate infinitely divisible distributions. Stochastic Processes and their Applications, 6(2):139–151, 1978.
  20. Yingyao Hu and Susanne M. Schennach. Instrumental variable treatment of nonclassical measurement error models. Econometrica, 76(1):195–216, 2008.
  21. Lisa A. Jackson, Jennifer C. Nelson, Patti Benson, Kathleen M. Neuzil, Robert J. Reid, Bruce M. Psaty, Susan R. Heckbert, Eric B. Larson, and Noel S. Weiss. Functional status is a confounder of the association of influenza vaccine and risk of all cause mortality in seniors. International Journal of Epidemiology, 35(2):345–352, 2005.
  22. Alexander S. Kechris. Classical Descriptive Set Theory, volume 156 of Graduate Texts in Mathematics. Springer, New York, 1995.
  23. Joseph B. Kruskal. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95–138, 1977.
  24. Manabu Kuroki and Judea Pearl. Measurement bias and effect restoration in causal inference. Biometrika, 101(2):423–437, 2014.
  25. Saskia le Cessie, Jan Debeij, Frits R. Rosendaal, Suzanne C. Cannegieter, and Jan P. Vandenbroucke. Quantification of bias in direct effects estimates due to different types of measurement error in the mediator. Epidemiology, 23(4):551–560, 2012.
  26. Zongyu Li, Xiaobo Guo, and Siwei Qiang. A survey of deep causal models and their industrial applications. Artificial Intelligence Review, 57(298), 2024.
  27. Kara Liu, Russ Altman, and Vasilis Syrgkanis. Detecting clinician implicit biases in diagnoses using proximal causal inference. In Russ B. Altman, Lawrence Hunter, Marylyn D. Ritchie, Tiffany Murray, and Teri E. Klein, editors, Biocomputing 2025, pages 330–345. World Scientific, Singapore, 2025.
  28. Christos Louizos, Uri Shalit, Joris M. Mooij, David Sontag, Richard Zemel, and Max Welling. Causal effect inference with deep latent-variable models. In Proceedings of the 31st Conference on Neural Information Processing Systems, pages 6449–6459, Long Beach, California, 2017.
  29. Christine Y. Lu. Observational studies: a review of study designs, challenges and strategies to reduce confounding. International Journal of Clinical Practice, 63(5):691–697, 2009.
  30. Eugene Lukacs. Characteristic Functions. Charles Griffin & Company, London, 2nd edition, 1970.
  31. Lutz Mattner. Some incomplete but boundedly complete location families. The Annals of Statistics, 21(4):2158–2162, 1993.
  32. Wang Miao, Zhi Geng, and Eric J. Tchetgen Tchetgen. Identifying causal effects with proxy variables of an unmeasured confounder. Biometrika, 105(4):987–993, 2018.
  33. Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge, Massachusetts, 2012.
  34. Yu Nishiyama and Kenji Fukumizu. Characteristic kernels and infinitely divisible distributions. Journal of Machine Learning Research, 17(180):1–28, 2016.
  35. Chan Park, David B. Richardson, and Eric J. Tchetgen Tchetgen. Single proxy control. Biometrics, 80(2):ujae027, 2024.
  36. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, California, 2017.
  37. Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, Cambridge, 2nd edition, 2009.
  38. Judea Pearl. On measurement bias in causal inference. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, pages 425–432, Santa Catalina Island, California, 2010.
  39. Judea Pearl and Elias Bareinboim. External validity: From do-calculus to transportability across populations. Statistical Science, 29(4):579–595, 2014.
  40. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning …
  41. Matthew A. Petroff. Accessible color sequences for data visualization. arXiv preprint arXiv:2107.02270, 2024.
  42. Svetlozar T. Rachev, Young S. Kim, Michele L. Bianchi, and Frank J. Fabozzi. Financial Models with Lévy Processes and Volatility Clustering. Wiley & Sons, Hoboken, New Jersey, 2011.
  43. Grace V. Ringlein, Trang Q. Nguyen, Peter P. Zandi, Elizabeth A. Stuart, and Harsh Parikh. Demystifying proximal causal inference. arXiv preprint arXiv:2512.24413, 2025.
  44. James Robins. A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect. Mathematical Modelling, 7(9-12):1393–1512, 1986.
  45. Jeffrey S. Rosenthal. A First Look at Rigorous Probability Theory. World Scientific, Singapore, 2nd edition, 2006.
  46. Jan Rosiński. Tempering stable processes. Stochastic Processes and their Applications, 117(6):677–707, 2007.
  47. Halsey L. Royden and Patrick M. Fitzpatrick. Real Analysis. Pearson, London, 4th edition, 2010.
  48. Walter Rudin. Real and Complex Analysis. McGraw-Hill, Singapore, 3rd edition, 1987.
  49. Walter Rudin. Functional Analysis. McGraw-Hill, Singapore, 2nd edition, 1991.
  50. Ken-iti Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge, 1999.
  51. Xinwei Shen and Nicolai Meinshausen. Engression: extrapolation through the lens of distributional regression. Journal of the Royal Statistical Society Series B: Statistical Methodology, 87(3):653–677, 2025.
  52. Lynn A. Steen and J. Arthur Seebach Jr. Counterexamples in Topology. Holt, Rinehart and Winston, New York, 1970.
  53. Gábor J. Székely. E-statistics: The energy of statistical samples. Technical Report 02-16, Bowling Green State University, Bowling Green, Ohio, 2002.
  54. Eric J. Tchetgen Tchetgen. The control outcome calibration approach for causal inference with unobserved confounding. American Journal of Epidemiology, 179(5):633–640, 2014.
  55. Eric J. Tchetgen Tchetgen, Andrew Ying, Yifan Cui, Xu Shi, and Wang Miao. An introduction to proximal causal inference. Statistical Science, 39(3):375–390, 2024.
  56. Olof Thorin. On the infinite divisibility of the lognormal distribution. Scandinavian Actuarial Journal, 1977(3):121–148, 1977.
  57. U.S. Census Bureau. Disclosure avoidance for the 2020 Census: An introduction. Technical report, U.S. Government Publishing Office, Washington, D.C., 2021.
  58. Wouter A. C. van Amsterdam, Joost J. C. Verhoeff, Netanja I. Harlianto, Gijs A. Bartholomeus, Aahlad M. Puli, Pim A. de Jong, Tim Leiner, Anne S. R. van Lindert, Marinus J. C. Eijkemans, and Rajesh Ranganath. Individual treatment effect estimation in the presence of unobserved confounding using proxies: a cohort study in stage III non-small cell lung can…
  59. Tyler J. VanderWeele and Peng Ding. Sensitivity analysis in observational research: Introducing the E-value. Annals of Internal Medicine, 167(4):268–274, 2017.
  60. Tyler J. VanderWeele, Kofi Asomaning, Eric J. Tchetgen Tchetgen, Younghun Han, Margaret R. Spitz, Sanjay Shete, Xifeng Wu, Valerie Gaborieau, Ying Wang, John McLaughlin, Rayjean J. Hung, Paul Brennan, Christopher I. Amos, David C. Christiani, and Xihong Lin. Genetic variants on 15q25.1, smoking, and lung cancer: An assessment of mediation and interaction…
  61. Norbert Wiener. Tauberian theorems. Annals of Mathematics, 33(1):1–100, 1932.
  62. Richard Wyss, Chen Yanover, Tal El-Hay, Dimitri Bennett, Robert W. Platt, Andrew R. Zullo, Grammati Sari, Xuerong Wen, Yizhou Ye, Hongbo Yuan, Mugdha Gokhale, Elisabetta Patorno, and Kueiyu J. Lin. Machine learning for improving high-dimensional proxy confounder adjustment in healthcare database studies: An overview of the current literature. Pharmacoepid…
  63. Liyuan Xu and Arthur Gretton. Kernel single proxy control for deterministic confounding. In Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, pages 3736–3744, Mai Khao, Thailand, 2025.
  64. Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, and Aidong Zhang. A survey on causal inference. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5):1–46, 2021.
  65. Yikun Zhang, Yen-Chi Chen, and Alexander Giessing. Nonparametric inference on dose-response curves without the positivity condition. arXiv preprint arXiv:2405.09003, 2025.
  66. Zhiheng Zhang and Xinyan Su. Partial identification with proxy of latent confoundings via sum-of-ratios fractional programming. In Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, pages 4140–4172, Barcelona, Spain, 2024.