Pith: machine review for the scientific record

arXiv: 1809.06304 · v1 · submitted 2018-09-17 · 📊 stat.ML · cs.IT · cs.LG · math.IT


Approximate message-passing for convex optimization with non-separable penalties

keywords: approximate, convex, message-passing, optimization, penalties, linear, non-separable, scheme
abstract

We introduce an iterative optimization scheme for convex objectives consisting of a linear loss and a non-separable penalty, based on the expectation-consistent approximation and the vector approximate message-passing (VAMP) algorithm. Specifically, the penalties we approach are convex on a linear transformation of the variable to be determined, a notable example being total variation (TV). We describe the connection between message-passing algorithms -- typically used for approximate inference -- and proximal methods for optimization, and show that our scheme is, as VAMP, similar in nature to the Peaceman-Rachford splitting, with the important difference that stepsizes are set adaptively. Finally, we benchmark the performance of our VAMP-like iteration in problems where TV penalties are useful, namely classification in task fMRI and reconstruction in tomography, and show faster convergence than that of state-of-the-art approaches such as FISTA and ADMM in most settings.
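The abstract relates the proposed VAMP-like iteration to Peaceman-Rachford splitting, the key difference being that the paper's stepsizes adapt across iterations. As a point of reference, here is a minimal sketch of plain Peaceman-Rachford splitting with a fixed stepsize, applied to an l1-penalized least-squares problem (a separable stand-in for the non-separable TV penalties the paper targets); the function names and problem setup are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def peaceman_rachford_lasso(A, y, lam, gamma=1.0, iters=500):
    """Fixed-stepsize Peaceman-Rachford splitting for
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    Illustrative only: the paper's scheme handles penalties that are
    convex on a linear transform of x (e.g. TV) and sets gamma adaptively.
    """
    n = A.shape[1]
    # prox of gamma * 0.5||Ax - y||^2 amounts to solving a linear system
    M = np.eye(n) + gamma * A.T @ A
    Aty = A.T @ y
    z = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, z + gamma * Aty)  # prox of the quadratic loss
        z_ref = 2.0 * x - z                      # first reflection
        u = soft_threshold(z_ref, gamma * lam)   # prox of the l1 penalty
        z = 2.0 * u - z_ref                      # second reflection
    return u
```

Each sweep composes the two reflected proximal operators (2*prox - I); Douglas-Rachford would instead average the reflected point with the previous iterate, which is why Peaceman-Rachford is often faster but needs stronger convexity assumptions to converge.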

This paper has not been read by Pith yet.

discussion (0)
