pith. machine review for the scientific record.

arxiv: 1610.05708 · v3 · submitted 2016-10-18 · 🧮 math.OC


Relatively-Smooth Convex Optimization by First-Order Methods, and Applications

Authors on Pith: no claims yet
classification: 🧮 math.OC
keywords: convex · function · smooth · first-order · relative · algorithms · approach
original abstract

The usual approach to developing and analyzing first-order methods for smooth convex optimization assumes that the gradient of the objective function is uniformly smooth with some Lipschitz constant $L$. However, in many settings the differentiable convex function $f(\cdot)$ is not uniformly smooth -- for example in $D$-optimal design where $f(x):=-\ln \det(HXH^T)$, or even the univariate setting with $f(x) := -\ln(x) + x^2$. Herein we develop a notion of "relative smoothness" and relative strong convexity that is determined relative to a user-specified "reference function" $h(\cdot)$ (that should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth with respect to a correspondingly fairly-simple reference function $h(\cdot)$. We extend two standard algorithms -- the primal gradient scheme and the dual averaging scheme -- to our new setting, with associated computational guarantees. We apply our new approach to develop a new first-order method for the $D$-optimal design problem, with associated computational complexity analysis. Some of our results have a certain overlap with the recent work \cite{bbt}.
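As a hypothetical sketch (not code from the paper), the primal gradient scheme mentioned in the abstract replaces the usual quadratic upper model with a Bregman divergence of the reference function $h(\cdot)$: each step solves $\nabla h(y) = \nabla h(x_k) - \nabla f(x_k)/L$. For the abstract's univariate example $f(x) := -\ln(x) + x^2$, one can check that $f$ is $2$-smooth relative to $h(x) := x^2/2 - \ln(x)$, since $f''(x) = 1/x^2 + 2 \le 2(1/x^2 + 1) = 2h''(x)$, and the step has a closed form (the function names below are illustrative):

```python
import math

def grad_f(x):
    # f(x) = -ln(x) + x^2, the abstract's non-uniformly-smooth example
    return -1.0 / x + 2.0 * x

def grad_h(x):
    # reference function h(x) = x^2/2 - ln(x); f is 2-smooth relative to h
    return x - 1.0 / x

L = 2.0  # relative smoothness constant: f''(x) <= L * h''(x) for all x > 0

def bregman_step(x):
    # Primal gradient step: solve grad_h(y) = grad_h(x) - grad_f(x)/L for y > 0.
    # Here grad_h(y) = y - 1/y = c gives y^2 - c*y - 1 = 0, positive root:
    c = grad_h(x) - grad_f(x) / L
    return (c + math.sqrt(c * c + 4.0)) / 2.0

x = 5.0
for _ in range(100):
    x = bregman_step(x)
# x converges to the minimizer 1/sqrt(2) of f on x > 0
```

Because $f$ is also relatively strongly convex with respect to this $h$, the iteration contracts quickly here; the general guarantee in the relatively-smooth setting is the $O(1/k)$ rate the paper establishes.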

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A Single Deep Preference-Conditioned Policy for Learning Pareto Coverage Sets

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    A single preference-conditioned policy achieves unique and Lipschitz-continuous Pareto coverage in multi-objective MDPs via a new mirror-descent policy iteration algorithm with O(1/k) convergence.