remJ_unit
remJ_unit shows that the remainder after linearizing the J-cost at base 1 equals exactly the J-cost of $e^L$. Researchers deriving Taylor expansions or harm linearizations in the Recognition Science cost module would cite this identity when simplifying around the unit point. The proof is a short tactic proof that unfolds remJ, linJ and Jcost and then simplifies algebraically.
claim
For every real $L$, the remainder after linearization of the J-cost function at unit base satisfies $remJ(1, L) = J_{cost}(e^L)$.
background
The J-cost is the function $J_{cost}(x) = (x + x^{-1})/2 - 1$ for $x > 0$. Its linearization under log-strain $L$ is $linJ(x, L) = ((x - x^{-1})/2) L$, so the remainder is defined by $remJ(x, L) = J_{cost}(x e^L) - J_{cost}(x) - linJ(x, L)$. At $x = 1$ both $J_{cost}(1)$ and $linJ(1, L)$ vanish, leaving the pure $J_{cost}(e^L)$ term.
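The definitions described above can be sketched in Lean as follows. The names match the source, but the bodies are reconstructed from the prose and may differ in detail (e.g. positivity side conditions or namespacing) from the actual module:

```lean
import Mathlib.Analysis.SpecialFunctions.Exp

-- J-cost: (x + x⁻¹)/2 - 1, intended for x > 0 (sketch, reconstructed from prose)
noncomputable def Jcost (x : ℝ) : ℝ := (x + x⁻¹) / 2 - 1

-- Linearization of J-cost under log-strain L (sketch)
noncomputable def linJ (x L : ℝ) : ℝ := ((x - x⁻¹) / 2) * L

-- Remainder after linearization (sketch)
noncomputable def remJ (x L : ℝ) : ℝ :=
  Jcost (x * Real.exp L) - Jcost x - linJ x L
```

With these definitions, $remJ(1, L)$ collapses because $Jcost(1) = 0$ and $linJ(1, L) = 0$.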
proof idea
The proof unfolds the definitions of remJ, linJ and Jcost, then applies simp to cancel the vanishing $J_{cost}(1)$ and $linJ(1, L)$ terms, leaving $J_{cost}(e^L)$ on both sides of the goal. It is a two-tactic proof that relies on the algebraic cancellation built into those definitions.
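Writing the cancellation out at $x = 1$, directly from the definitions in the background section:

$remJ(1, L) = J_{cost}(1 \cdot e^L) - J_{cost}(1) - linJ(1, L) = J_{cost}(e^L) - 0 - 0 = J_{cost}(e^L)$

Note also that $J_{cost}(e^L) = (e^L + e^{-L})/2 - 1 = \cosh L - 1 = L^2/2 + O(L^4)$, so the remainder at the unit base is exactly the quadratic-order quantity that downstream remainder bounds aim to control.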
why it matters in Recognition Science
This base-case identity supports the derivative theory in the J-cost module, which replaces axioms in Ethics/Harm by showing that linearization errors are quadratic. It supplies the exact starting point for remainder bounds and connects to the Recognition Composition Law through the J-function's functional equation. No downstream uses are recorded yet, leaving it as a local lemma for expansion arguments.
scope and limits
- Does not bound the remainder for nonzero L or derive the quadratic estimate.
- Does not extend the identity to bases other than 1.
- Does not reference physical constants, dimensions, or forcing-chain steps.
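For orientation, a remainder bound of the kind this lemma deliberately does not provide could be stated as below. The lemma name is hypothetical and the proof is omitted; it is a sketch of a plausible follow-up, not part of the source module:

```lean
-- Hypothetical follow-up (not in the source): a quadratic lower bound on the
-- unit-base remainder, which would follow from cosh L - 1 ≥ L^2 / 2.
lemma remJ_unit_quadratic_lower (L : ℝ) : L ^ 2 / 2 ≤ remJ 1 L := by
  sorry -- sketch only; not proved here
```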
formal statement (Lean)
lemma remJ_unit (L : ℝ) : remJ 1 L = Jcost (exp L) := by
proof body
Tactic-mode proof.
  unfold remJ linJ Jcost
  simp