pith. machine review for the scientific record.
lemma · proved · tactic proof · high

isCalibratedLimit_of_isCalibrated


The lemma shows that any normalized cost function F whose associated H satisfies a pointwise second-derivative calibration at zero also satisfies the equivalent limit-form calibration. The T5 J-uniqueness argument cites it to equate the differential and asymptotic characterizations of the cost. The short tactic proof extracts F(1)=0, equates the second derivatives of H and G via the constant shift, substitutes the calibration hypothesis, and directly invokes the definition of HasLogCurvature.

claim

Let $F:ℝ→ℝ$ satisfy $F(1)=0$. Assume $H_F$ is twice continuously differentiable with $(H_F)'(0)=0$ and that the logarithmic curvature of $H_F$ equals its second derivative at zero. If $F$ is calibrated, i.e., the second derivative of $G_F(t)=F(e^t)$ at zero equals 1, then the limit as $t→0$ of $2(H_F(t)-1)/t^2$ equals 1.

background

The module supplies helper lemmas for the T5 cost-uniqueness proof. Key local definitions:

IsNormalized: $F(1)=0$.
$G_F(t)=F(e^t)$: the log-coordinate reparametrization.
$H_F(t)=G_F(t)+1$: the shifted cost.
IsCalibrated: $G_F''(0)=1$.
HasLogCurvature$(H,κ)$: $2(H(t)-1)/t^2$ tends to $κ$ as $t→0$.
IsCalibratedLimit: HasLogCurvature$(H_F,1)$, the limit form of calibration.

Upstream, the CostAlgebra.H definition supplies the shifted cost $H(x)=J(x)+1$ that converts the Recognition Composition Law into the d'Alembert equation $H(xy)+H(x/y)=2H(x)H(y)$.
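The local predicates admit a rough Lean sketch. This is a hypothetical reconstruction from the prose above, not the module's actual source: the definition names follow the text, but the choice of filter in HasLogCurvature (assumed here to be the punctured neighborhood of zero) and the namespace layout may differ.

 import Mathlib

 def IsNormalized (F : ℝ → ℝ) : Prop := F 1 = 0

 -- log-coordinate reparametrization G_F(t) = F(e^t)
 noncomputable def G (F : ℝ → ℝ) : ℝ → ℝ := fun t => F (Real.exp t)

 -- shifted cost H_F(t) = G_F(t) + 1
 noncomputable def H (F : ℝ → ℝ) : ℝ → ℝ := fun t => G F t + 1

 -- differential calibration: G_F''(0) = 1
 def IsCalibrated (F : ℝ → ℝ) : Prop := deriv (deriv (G F)) 0 = 1

 -- 2(H(t) - 1)/t^2 → κ as t → 0; the punctured filter is an assumption
 def HasLogCurvature (Hf : ℝ → ℝ) (κ : ℝ) : Prop :=
   Filter.Tendsto (fun t => 2 * (Hf t - 1) / t ^ 2)
     (nhdsWithin 0 {(0 : ℝ)}ᶜ) (nhds κ)

 -- limit form of calibration
 def IsCalibratedLimit (F : ℝ → ℝ) : Prop := HasLogCurvature (H F) 1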

proof idea

The tactic proof first obtains $F(1)=0$ from IsNormalized and deduces $H_F(0)=1$. It then shows that the second derivative of $H_F$ at zero equals that of $G_F$ at zero by the add-constant rule for derivatives. Substituting the IsCalibrated hypothesis yields the second derivative equal to 1. The final simpa step replaces the curvature parameter in HasLogCurvature by this value and rewrites the conclusion as IsCalibratedLimit.
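The analytic content linking the two calibration predicates is a second-order Taylor expansion; it is worth noting, though, that the lemma does not reprove this step. With $H_F(0)=1$ and $(H_F)'(0)=0$, Taylor's theorem gives $H_F(t) = 1 + \tfrac{1}{2}H_F''(0)\,t^2 + o(t^2)$, hence $2(H_F(t)-1)/t^2 = H_F''(0) + o(1) → H_F''(0)$ as $t→0$. In the formal statement this expansion is packaged into the hypothesis h_log, which asserts HasLogCurvature$(H_F, H_F''(0))$ outright; the proof only has to rewrite the curvature value $H_F''(0)$ to 1.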

why it matters in Recognition Science

The lemma closes the gap between the differential calibration condition (used in derivative-based arguments) and the limit-form calibration required by the T5 uniqueness chain. It sits inside the Functional Equation Helpers for T5 and feeds the larger J-uniqueness argument that forces the cosh form of the cost. No downstream uses are recorded yet; the result is a direct bridge between the two calibration predicates appearing in the Recognition Science cost axioms.

scope and limits

formal statement (Lean)

 lemma isCalibratedLimit_of_isCalibrated
   (F : ℝ → ℝ) (hNorm : IsNormalized F)
   (_h_diff : ContDiff ℝ 2 (H F)) (h_deriv0 : deriv (H F) 0 = 0)
   (h_log : HasLogCurvature (H F) (deriv (deriv (H F)) 0))
   (h_calib : IsCalibrated F) :
   IsCalibratedLimit F := by

proof body

Tactic-mode proof.

   have hNorm' : F 1 = 0 := by simpa [IsNormalized] using hNorm
   have h_H0 : H F 0 = 1 := by simp [H, G, hNorm']
   have h_d2 : deriv (deriv (H F)) 0 = 1 := by
     -- H = G + 1, so second derivatives at 0 agree
     have hderiv : deriv (H F) = deriv (G F) := by
       funext t
       change deriv (fun y => G F y + 1) t = deriv (G F) t
       simpa using (deriv_add_const (f := G F) (x := t) (c := (1 : ℝ)))
     have hderiv2 : deriv (deriv (H F)) = deriv (deriv (G F)) := congrArg deriv hderiv
     have hderiv2_at0 : deriv (deriv (H F)) 0 = deriv (deriv (G F)) 0 :=
       congrArg (fun g => g 0) hderiv2
     simpa [IsCalibrated] using hderiv2_at0.trans h_calib
   simpa [IsCalibratedLimit, h_d2] using h_log
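A hypothetical invocation, assuming the lemma and the local predicates are in scope with the signatures shown above (hypothesis names here are illustrative):

 example (F : ℝ → ℝ) (hNorm : IsNormalized F)
     (h_diff : ContDiff ℝ 2 (H F)) (h0 : deriv (H F) 0 = 0)
     (hlog : HasLogCurvature (H F) (deriv (deriv (H F)) 0))
     (hc : IsCalibrated F) : IsCalibratedLimit F :=
   isCalibratedLimit_of_isCalibrated F hNorm h_diff h0 hlog hc

Note that the caller must already know HasLogCurvature at the value $H_F''(0)$; the lemma's role is only to replace that value with 1 using the calibration hypothesis.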

depends on (14)

Lean names referenced from this declaration's body.