Pith: machine review for the scientific record

arXiv: 2503.15093 · v4 · submitted 2025-03-19 · 🧮 math.OC · cs.SY · eess.SY


Proximal Gradient Dynamics and Feedback Control for Equality-Constrained Composite Optimization

classification: 🧮 math.OC · cs.SY · eess.SY
keywords: constraints, dynamics, problems, composite, control, convergence, equality-constrained, equilibrium
Abstract

This paper studies equality-constrained composite minimization problems. This class of problems, capturing regularization terms and inequality constraints, naturally arises in a wide range of engineering and machine learning applications. To tackle these optimization problems, inspired by recent results, we introduce the proportional-integral proximal gradient dynamics (PI-PGD): a closed-loop system where the Lagrange multipliers are control inputs and the states are the problem decision variables. First, we establish the equivalence between the stationary points of the minimization problem and the equilibria of the PI-PGD. Then, for the case of affine constraints, we leverage tools from contraction theory to give a comprehensive convergence analysis for the dynamics, showing linear-exponential convergence towards the equilibrium. That is, the distance between each solution and the equilibrium is upper bounded by a function that first decreases linearly and then exponentially. Our findings are illustrated numerically on a set of representative examples, which include an exploratory application to nonlinear equality constraints.
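The closed-loop structure described in the abstract can be sketched numerically. Below is a minimal forward-Euler simulation for a toy instance min ½‖x − c‖² + μ‖x‖₁ subject to Ax = b, where the dual variable is produced by a proportional-integral feedback law on the constraint residual Ax − b and the state follows proximal gradient dynamics. The problem data, gains, and step sizes are illustrative assumptions, not values or the exact formulation from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Hypothetical instance: min_x 0.5*||x - c||^2 + mu*||x||_1  s.t.  A x = b
rng = np.random.default_rng(0)
n, m = 10, 2
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)   # ensures the constraint is feasible
mu = 0.1

gamma = 0.1        # proximal step size
kp, ki = 2.0, 2.0  # PI gains (illustrative, not from the paper)
dt = 1e-2          # forward-Euler step for the continuous-time dynamics

x = np.zeros(n)
z = np.zeros(m)    # integral of the constraint residual A x - b
for _ in range(20000):
    r = A @ x - b
    lam = kp * r + ki * z          # PI feedback: multiplier as control input
    grad = (x - c) + A.T @ lam     # gradient of smooth part plus dual term
    # Proximal gradient dynamics: x_dot = -x + prox_{gamma*mu*||.||_1}(x - gamma*grad)
    x += dt * (-x + soft_threshold(x - gamma * grad, gamma * mu))
    z += dt * r                    # integral action drives the residual to zero

print(np.linalg.norm(A @ x - b))  # constraint residual at the end of the run
```

At an equilibrium the integral state is stationary, which forces Ax = b, and the fixed-point condition of the proximal step recovers stationarity of the Lagrangian, mirroring the equivalence the paper establishes between equilibria and stationary points.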

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A Unified Control-Theoretic Framework for Saddle-Point Dynamics in Constrained Optimization

    math.OC · 2026-04 · unverdicted · novelty 7.0

    A PID feedback law on dual variables induces a unified family of saddle-point flows for constrained optimization, with explicit global exponential convergence guarantees under convexity and affine constraints.