pith. machine review for the scientific record.

arxiv: cs/0408007 · v1 · submitted 2004-08-02 · 💻 cs.LG · cs.CC

Recognition: unknown

Online convex optimization in the bandit setting: gradient descent without a gradient

Authors on Pith: no claims yet
classification 💻 cs.LG cs.CC
keywords: gradient · point · function · setting · chosen · convex · online · single
original abstract

We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function at our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not observe the entire function but only its value at our chosen point. We show how to achieve vanishingly small regret in this setting as well. Our approach uses a simple approximation of the gradient computed by evaluating the function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's online gradient-descent analysis without access to the gradient (being able to evaluate the function at only a single point).
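The abstract's one-point gradient trick can be sketched in a few lines: sample a uniform direction u on the unit sphere, evaluate the function once at a perturbed point, and use (d/δ)·f(x + δu)·u as an unbiased estimate of the gradient of a δ-smoothed version of f, then feed that estimate into projected gradient descent. The sketch below is a minimal illustration under assumed parameter choices (step size, smoothing radius, step count are illustrative, not from the paper), using a toy quadratic loss rather than an adversarial sequence:

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """FKM-style estimate of grad f(x) from a single function evaluation.

    E[(d/delta) * f(x + delta*u) * u] equals the gradient of f averaged
    over a ball of radius delta around x (a smoothed version of f).
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)              # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

def project_to_ball(x, radius):
    """Euclidean projection onto the ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def bandit_gradient_descent(f, x0, steps=30000, eta=0.005, delta=0.2, seed=0):
    """Projected gradient descent using only single-point value queries.

    Iterates stay in a ball of radius 1 - delta so that the perturbed
    query point x + delta*u remains inside the unit-ball feasible set.
    Returns the average iterate, as is standard in online-to-batch use.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    avg = np.zeros_like(x0)
    for _ in range(steps):
        g = one_point_gradient_estimate(f, x, delta, rng)
        x = project_to_ball(x - eta * g, radius=1.0 - delta)
        avg += x
    return avg / steps

# Toy illustration (not from the paper): minimize f(x) = ||x - c||^2
# over the unit ball, seeing only function values at queried points.
c = np.array([0.3, -0.2])
f = lambda x: float(np.sum((x - c) ** 2))
x0 = np.array([0.9, 0.0])
x_hat = bandit_gradient_descent(f, x0)
```

Despite the high variance of the single-evaluation estimate, averaging over many rounds pulls the iterate toward the minimizer, which is the mechanism behind the paper's regret bound in the bandit setting.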

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Select-then-differentiate: Solving Bilevel Optimization with Manifold Lower-level Solution Sets

    math.OC 2026-05 unverdicted novelty 7.0

    Optimistic bilevel optimization with manifold lower-level minimizers is differentiable if the optimistic selection is unique, yielding a pseudoinverse hyper-gradient and a convergent HG-MS algorithm whose rate depends...

  2. NePPO: Near-Potential Policy Optimization for General-Sum Multi-Agent Reinforcement Learning

    cs.LG 2026-03 unverdicted novelty 7.0

    NePPO learns a player-independent potential function via a novel objective whose minimization yields an approximate Nash equilibrium for general-sum multi-agent games.

  3. Ensemble Distributionally Robust Bayesian Optimisation

    cs.LG 2026-05 unverdicted novelty 6.0

    A tractable ensemble distributionally robust Bayesian optimization method achieves improved sublinear regret bounds under context uncertainty.

  4. OGPO: Sample Efficient Full-Finetuning of Generative Control Policies

    cs.LG 2026-05 unverdicted novelty 6.0

    OGPO is a sample-efficient off-policy method for full finetuning of generative control policies that reaches SOTA on robotic manipulation tasks and can recover from poor behavior-cloning initializations without expert data.