SBL algorithms are unified under majorization-minimization with new convergence results, and a dimension-invariant neural network learns superior data-driven update rules that generalize across matrices and parameters.
From Bayesian sparsity to gated recurrent nets
1 Pith paper cites this work; polarity classification is still indexing.
Fields: eess.SP (1) · Years: 2026 (1) · Verdicts: CONDITIONAL (1)
Sparse Bayesian Learning Algorithms Revisited: From Learning Majorizers to Structured Algorithmic Learning using Neural Networks
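For context, a minimal sketch of the classic EM-style SBL hyperparameter update, which is one instance of the majorization-minimization view the summary refers to. The dictionary `Phi`, noise variance `sigma2`, and stopping rule are illustrative assumptions; this is the textbook update, not the paper's learned, dimension-invariant rule.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=1e-2, n_iters=200, tol=1e-6):
    """EM-style SBL: each gamma update is an MM step on the Type-II (evidence) cost."""
    m, n = Phi.shape
    gamma = np.ones(n)                         # hyperparameters (prior variances)
    mu = np.zeros(n)
    for _ in range(n_iters):
        GPhiT = gamma[:, None] * Phi.T         # Gamma @ Phi.T with Gamma = diag(gamma)
        B = sigma2 * np.eye(m) + Phi @ GPhiT   # measurement-space covariance
        Binv = np.linalg.inv(B)
        mu = GPhiT @ Binv @ y                  # posterior mean of the weights
        # Diagonal of the posterior covariance: Gamma - Gamma Phi^T B^{-1} Phi Gamma
        Sigma_diag = gamma - np.einsum('ij,jk,ki->i', GPhiT, Binv, GPhiT.T)
        gamma_new = mu**2 + Sigma_diag         # EM / MM hyperparameter update
        if np.max(np.abs(gamma_new - gamma)) < tol:
            return mu, gamma_new
        gamma = gamma_new
    return mu, gamma

# Example: recover a sparse vector from a few noisy random measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 100))
w_true = np.zeros(100)
w_true[[3, 27, 64]] = [1.0, -2.0, 0.5]
y = Phi @ w_true + 0.01 * rng.standard_normal(50)
mu, gamma = sbl_em(Phi, y)
```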