On asymptotically optimal confidence regions and tests for high-dimensional models
2 Pith papers cite this work. Polarity classification is still indexing.
Years: 2026
Verdicts: 2 (both UNVERDICTED)
Representative citing papers: 2
Citing papers explorer
- PRADAS: PRior-Assisted DAta Splitting for False Discovery Rate Control
  PRADAS derives a Bayes-optimal mirror statistic for any splitting scheme, establishes asymptotic FDR control under weak dependence, and optimizes the split ratio as a stopping time to improve power over standard equal-split methods.
- Choosing the Right Regularizer for Applied ML: Simulation Benchmarks of Popular Scikit-learn Regularization Frameworks
  Simulations show Ridge, Lasso, and ElasticNet perform similarly for prediction at high sample-to-feature ratios, but Lasso feature selection recall drops to 0.18 under high multicollinearity and low SNR while ElasticNet holds at 0.93.
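The data-splitting mirror-statistic idea that the PRADAS abstract builds on can be sketched briefly. This is a minimal sketch of the generic split-and-mirror procedure, not PRADAS itself: the Bayes-optimal statistic and the stopping-time split ratio from the abstract are not reproduced, and the function names, OLS-per-half fit, and simulation setup below are all illustrative assumptions.

```python
import numpy as np

def mirror_statistics(X, y, rng):
    # Randomly split the samples in half and fit OLS on each half
    # independently (a simple stand-in for the Lasso fits often used).
    n = X.shape[0]
    idx = rng.permutation(n)
    i1, i2 = idx[: n // 2], idx[n // 2 :]
    b1, *_ = np.linalg.lstsq(X[i1], y[i1], rcond=None)
    b2, *_ = np.linalg.lstsq(X[i2], y[i2], rcond=None)
    # Mirror statistic: large and positive when both halves agree on a
    # signal; roughly symmetric around zero for null features.
    return np.sign(b1 * b2) * (np.abs(b1) + np.abs(b2))

def select_fdr(M, q):
    # Smallest threshold tau whose estimated false discovery proportion
    # #{M_j < -tau} / #{M_j > tau} falls below the target level q.
    for tau in np.sort(np.abs(M)):
        if (M < -tau).sum() / max((M > tau).sum(), 1) <= q:
            return np.flatnonzero(M > tau)
    return np.array([], dtype=int)

# Toy simulation: 10 true signals among 50 features.
rng = np.random.default_rng(0)
n, p, signals = 400, 50, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:signals] = 1.0
y = X @ beta + rng.standard_normal(n)
picked = select_fdr(mirror_statistics(X, y, rng), q=0.1)
```

The symmetry of null mirror statistics is what makes the negative tail a usable estimate of false discoveries in the positive tail; PRADAS's contribution, per the abstract, is choosing the statistic and the split ratio optimally rather than using this fixed equal-split recipe.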
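The selection-recall gap in the second abstract can be illustrated with a toy multicollinear design. This is a simplified stand-in for the paper's benchmark, not a reproduction of its 0.18/0.93 figures; the duplicate-column construction and the penalty settings (alpha=0.1, l1_ratio=0.2) are assumptions chosen only to make the grouping effect visible.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(1)
n, n_pairs = 200, 10
# Ten pairs of near-duplicate columns: a deliberately multicollinear
# design (within-pair correlation ~0.99).
z = rng.standard_normal((n, n_pairs))
X = np.repeat(z, 2, axis=1) + 0.07 * rng.standard_normal((n, 2 * n_pairs))
beta = np.zeros(2 * n_pairs)
beta[:10] = 1.0                      # first five pairs carry the signal
y = X @ beta + rng.standard_normal(n)

def support_recall(model):
    # Fraction of truly nonzero coefficients the fitted model keeps.
    model.fit(X, y)
    selected = np.abs(model.coef_) > 1e-6
    true_support = beta != 0
    return (selected & true_support).sum() / true_support.sum()

lasso_recall = support_recall(Lasso(alpha=0.1))
enet_recall = support_recall(ElasticNet(alpha=0.1, l1_ratio=0.2))
```

Lasso tends to keep one column from each correlated pair and zero out its twin, deflating recall, while ElasticNet's ridge component spreads weight across the pair (the grouping effect), which is the mechanism behind the gap the abstract reports.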