jcost_gradient_descent_converges
Gradient descent on the J-cost function J(x) = (x + x^{-1})/2 - 1 moves any positive x ≠ 1 strictly in the direction of the unique minimum at x = 1 for every positive step size η. This is the result a complexity theorist analyzing the tractability of physical ground-state search would invoke to argue that local J-cost minimization is efficient. The proof proceeds by case analysis on whether x exceeds or falls below 1, invokes the sign of the derivative from adjacent lemmas, and closes with linear arithmetic.
claim
Let $J(x) = (x + x^{-1})/2 - 1$ for $x > 0$. For any $x > 0$ with $x ≠ 1$ and any step size $η > 0$, the updated value satisfies $x - η J'(x) < x$ whenever $x > 1$, and $x - η J'(x) > x$ whenever $x < 1$.
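For concreteness, differentiating the stated $J$ gives $J'(x) = (1 - x^{-2})/2 = (x^2 - 1)/(2x^2)$, so on $x > 0$ the sign of $J'(x)$ is the sign of $x^2 - 1$: positive for $x > 1$ and negative for $0 < x < 1$. The update $x - η J'(x)$ therefore subtracts a positive quantity in the first case and a negative one in the second, which is exactly the pair of inequalities above.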
background
The module Information.PhysicsComplexityStructure frames the computational complexity of physics via J-cost minimization. J-cost is the cost function J(x) = (x + x^{-1})/2 - 1 induced by multiplicative recognizers (from MultiplicativeRecognizerL4.cost) and recognition events (from ObserverForcing.cost). The module states that J-cost is strictly convex with a unique global minimum at x = 1, that the ground state is verifiable in O(1) time, and that gradient descent converges monotonically to it.
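Both stated properties can be read off directly from the definition: $J(x) = (x + x^{-1})/2 - 1 = (x - 1)^2/(2x) ≥ 0$ for $x > 0$, with equality exactly at $x = 1$, and $J''(x) = x^{-3} > 0$ on $x > 0$, giving strict convexity.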
proof idea
The tactic proof splits the conjunction with constructor. For the x > 1 branch it applies jcost_deriv_pos_of_gt_one to obtain a positive derivative, forms the positive product via mul_pos, and closes with linarith. For the x < 1 branch it applies jcost_deriv_neg_of_lt_one, forms the negative product via mul_neg_of_pos_of_neg, and again closes with linarith.
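For intuition, here is a minimal numerical sketch of a single step, assuming the closed form $J'(x) = (x^2 - 1)/(2x^2)$ derived in the claim section; jcostDerivF and jstep are hypothetical helpers, not part of the module:

-- Hypothetical standalone demo; assumes jcost_deriv x = (x² - 1) / (2x²).
def jcostDerivF (x : Float) : Float := (x * x - 1) / (2 * x * x)
def jstep (η x : Float) : Float := x - η * jcostDerivF x
#eval jstep 0.1 2.0   -- 1.9625 < 2.0: the x > 1 branch decreases x
#eval jstep 0.1 0.5   -- 0.65 > 0.5: the x < 1 branch increases x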
why it matters in Recognition Science
Labeled IC-005.16, the theorem supports the module's complexity summary that lists J-cost gradient descent as converging monotonically and places ground-state verification in P. It rests on J-uniqueness from the forcing chain (T5) and the Recognition Composition Law. The result feeds the claim that local 8-tick dynamics remain O(1) per step while φ-rung computations remain EXPTIME.
scope and limits
- Does not establish convergence rates or optimal step sizes.
- Does not extend to higher-dimensional or multi-variable J-cost landscapes.
- Does not address cases where x = 1 or η ≤ 0.
- Does not prove that repeated steps reach exactly 1 in finite time; see the numerical sketch after this list.
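As a numerical illustration of that last limitation (hypothetical standalone code, again assuming the closed-form derivative), iterating the step from x = 2 with η = 0.5 descends monotonically toward 1 without ever reaching it in exact arithmetic:

-- Iterate the descent step n times; a hypothetical illustration only.
def jcostDerivF (x : Float) : Float := (x * x - 1) / (2 * x * x)
def descend : Nat → Float → Float
  | 0,     x => x
  | n + 1, x => descend n (x - 0.5 * jcostDerivF x)
#eval descend 5 2.0    -- ≈ 1.23, still above 1
#eval descend 50 2.0   -- within ~10⁻¹⁴ of 1; the real-valued iterates never equal 1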
formal statement (Lean)
theorem jcost_gradient_descent_converges (x : ℝ) (hx_pos : x > 0) (hx_ne : x ≠ 1)
    (η : ℝ) (hη_pos : η > 0) :
    (x > 1 → x - η * jcost_deriv x < x) ∧
    (x < 1 → x - η * jcost_deriv x > x) := by
proof body
Tactic-mode proof.
  constructor
  · intro h
    have hd : jcost_deriv x > 0 := jcost_deriv_pos_of_gt_one x h
    linarith [mul_pos hη_pos hd]
  · intro h
    have hd : jcost_deriv x < 0 := jcost_deriv_neg_of_lt_one x hx_pos h
    have : η * jcost_deriv x < 0 := mul_neg_of_pos_of_neg hη_pos hd
    linarith
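The sign lemmas jcost_deriv_pos_of_gt_one and jcost_deriv_neg_of_lt_one are imported from adjacent code. As a standalone sanity check of the x > 1 case, here is a minimal Mathlib sketch under the assumption that jcost_deriv unfolds to the closed form $(x^2 - 1)/(2x^2)$; the module's actual lemma statements may differ:

import Mathlib

-- Assumes jcost_deriv x = (x ^ 2 - 1) / (2 * x ^ 2), i.e. J'(x) for
-- J(x) = (x + x⁻¹) / 2 - 1.
example (x : ℝ) (h : 1 < x) : 0 < (x ^ 2 - 1) / (2 * x ^ 2) := by
  have h1 : 0 < x ^ 2 - 1 := by nlinarith
  have h2 : 0 < 2 * x ^ 2 := by nlinarith
  exact div_pos h1 h2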