pith. machine review for the scientific record.

arxiv: 2410.14375 · v3 · submitted 2024-10-18 · 💻 cs.LG · cs.CL

Recognition: unknown

Causal Fine-Tuning under Latent Confounded Shift

Authors on Pith: no claims yet
keywords: causal · confounded · fine-tuning · latent · shift · spurious · during · model
abstract

Adapting to latent confounded shift remains a core challenge in modern AI. This setting is driven by hidden variables that induce spurious correlations between inputs and outputs during training, leading models to rely on non-causal shortcuts. For example, a model may learn to treat metadata (e.g., a data source such as "Amazon") as a proxy for positive sentiment, causing failure when that source becomes predominantly negative at deployment. To address latent confounded shift, we introduce Causal Fine-Tuning (CFT). Using a structural causal model as an inductive bias, we derive sufficient identification conditions that motivate a fine-tuning objective for decomposing representations into high-level stable and low-level shift-sensitive components. Instantiating this framework in BERT, we show that learning such causal/spurious representations and adjusting them accordingly yields a more robust predictor. Experiments on spurious-correlation injection attacks in text demonstrate that our method outperforms black-box domain-generalization baselines, highlighting the benefits of explicitly modeling causal structure.
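The abstract describes decomposing an encoder's representation into a high-level stable (causal) component used for prediction and a low-level shift-sensitive (spurious) component that absorbs confounder information. The sketch below is a rough, non-authoritative illustration of that idea, not the paper's method: it splits a stand-in encoder's output into two sub-representations, trains a label head on the stable part, pushes a known confounder label (e.g., a data-source id) into the spurious part, and uses a generic gradient-reversal penalty to scrub it from the stable part. The encoder, head names, dimensions, and the adversarial penalty are all illustrative assumptions; the paper instead derives its objective from sufficient identification conditions under a structural causal model and instantiates it in BERT.

```python
# Illustrative sketch only: CFT's real objective comes from the paper's
# identification conditions; this uses a generic adversarial decomposition.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class CausalFineTuner(nn.Module):
    def __init__(self, in_dim=768, causal_dim=64, spurious_dim=64,
                 n_labels=2, n_sources=3):
        super().__init__()
        # Stand-in for a fine-tuned BERT encoder (assumption: we operate on
        # precomputed [CLS] embeddings of dimension in_dim).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, causal_dim + spurious_dim),
        )
        self.causal_dim = causal_dim
        self.label_head = nn.Linear(causal_dim, n_labels)      # predict y from stable part
        self.source_head = nn.Linear(spurious_dim, n_sources)  # absorb confounder into h_s
        self.adv_head = nn.Linear(causal_dim, n_sources)       # adversary probing h_c

    def split(self, x):
        h = self.encoder(x)
        return h[:, :self.causal_dim], h[:, self.causal_dim:]

    def loss(self, x, y, source):
        h_c, h_s = self.split(x)
        task = F.cross_entropy(self.label_head(h_c), y)
        absorb = F.cross_entropy(self.source_head(h_s), source)
        # Gradient reversal: the adversary learns to read the confounder from
        # h_c, while the encoder is pushed to remove that information from h_c.
        purge = F.cross_entropy(self.adv_head(GradReverse.apply(h_c)), source)
        return task + absorb + purge


model = CausalFineTuner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 768)             # stand-in for BERT [CLS] embeddings
y = torch.randint(0, 2, (32,))       # sentiment labels
src = torch.randint(0, 3, (32,))     # confounder, e.g. data-source id
opt.zero_grad()
model.loss(x, y, src).backward()
opt.step()

# At deployment, predict from the stable component only, so a shift in the
# confounder's relationship to the label does not flip predictions:
with torch.no_grad():
    h_c, _ = model.split(x)
    preds = model.label_head(h_c).argmax(dim=-1)
```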

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.