pith · machine review for the scientific record
theorem · proved · tactic proof · high

normalized_eq_one_at_max


The theorem establishes that the normalized sequence ã(n) equals 1 whenever a(n) attains the running maximum of the sequence up to n and a(n) > 0. Navier-Stokes regularity researchers cite this when constructing scale-invariant limits from sequences with diverging sup-norms. The proof is a short tactic-mode reduction that unfolds the normalization definition, rewrites via the maximum hypothesis, and invokes the positivity of the running maximum to cancel the denominator.

claim

Let $a : ℕ → ℝ$ be a real sequence and $n ∈ ℕ$. If $a_n$ equals the running maximum of $a$ up to index $n$ and $a_n > 0$, then the normalized value $ã_n = a_n / M̃_n$ equals $1$, where $M̃_n = \max_{k ≤ n} a_k$.

background

The module formalizes running-max normalization for the Navier-Stokes regularity argument (NS Paper §3, Step 1). Given a sequence of times with diverging sup-norms $M_n → ∞$, the running maximum is defined as $M̃(n) = \max_{k ≤ n} a(k)$ via Finset.sup' over the initial segment. The normalized sequence is then $ã(n) = a(n) / M̃(n)$, which satisfies $|ã(n)| ≤ 1$ for all $n$ by construction of the maximum. The upstream lemma runningMax_pos states that if $0 < a(n)$ then $0 < M̃(n)$, ensuring the denominator is positive and the normalization is well-defined.
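For orientation, the two definitions described above might be sketched in Lean as follows. This is an assumption, not the module's source: the names runningMax and normalized match the prose, but the actual declarations (in particular the nonemptiness witness passed to Finset.sup') may differ.

```lean
import Mathlib

-- Sketch only: the module's actual definitions may differ.
-- Maximum of a over the nonempty initial segment {0, …, n}.
noncomputable def runningMax (a : ℕ → ℝ) (n : ℕ) : ℝ :=
  (Finset.range (n + 1)).sup' Finset.nonempty_range_succ a

-- Normalization by the running maximum, ã(n) = a(n) / M̃(n).
noncomputable def normalized (a : ℕ → ℝ) (n : ℕ) : ℝ :=
  a n / runningMax a n
```

Using Finset.range (n + 1) makes the segment nonempty for every n, so sup' (which, unlike sup, needs no bottom element) is applicable and the maximum is genuinely attained by some a(k) with k ≤ n.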

proof idea

The proof unfolds the definition of normalized to obtain the quotient $a(n) / runningMax\ a\ n$, rewrites the numerator via the hypothesis $a\ n = runningMax\ a\ n$ so the goal becomes $runningMax\ a\ n / runningMax\ a\ n = 1$, and closes with div_self. The required non-zero condition follows immediately from runningMax_pos applied to the positivity hypothesis on $a(n)$.
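The cancellation step rests on Mathlib's div_self, which requires only that the denominator is nonzero; a minimal standalone illustration of the same pattern (positivity upgraded to nonzeroness via ne_of_gt):

```lean
import Mathlib

-- div_self (h : a ≠ 0) : a / a = 1, here instantiated on ℝ.
-- ne_of_gt turns 0 < x into x ≠ 0, matching the lemma's hypothesis.
example (x : ℝ) (hx : 0 < x) : x / x = 1 :=
  div_self (ne_of_gt hx)
```

In the theorem itself, runningMax_pos plays the role of hx, supplying positivity of the denominator.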

why it matters in Recognition Science

This result closes the normalization step in the running-max construction used to extract ancient solutions from hypothetical blow-up sequences in the Navier-Stokes regularity proof. It guarantees that the rescaled field attains L^∞ norm exactly 1 at the selected times, enabling the subsequent rescaling $x ↦ x/λ_n$, $t ↦ t/λ_n²$ with $λ_n = 1/√M̃(n)$. The lemma sits inside the pure real-analysis layer that precedes any appeal to the Recognition Science forcing chain or phi-ladder constants.

scope and limits

formal statement (Lean)

  theorem normalized_eq_one_at_max (a : ℕ → ℝ) (n : ℕ)
      (hmax : a n = runningMax a n) (hpos : 0 < a n) :
      normalized a n = 1 := by

proof body

A three-step tactic-mode proof.

    unfold normalized
    rw [hmax]
    exact div_self (ne_of_gt (runningMax_pos a n hpos))

  /-! ## Rescaled Coordinates -/

  /-- The rescaling factor λₙ = 1 / √(runningMax a n).
      Used to rescale space: x ↦ x/λₙ, t ↦ t/λₙ². -/
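The excerpt ends with the docstring above, without the declaration it documents. A plausible completion consistent with λₙ = 1/√(runningMax a n) is sketched below; the name rescaleFactor is hypothetical, and runningMax is restated from the background section so the block is self-contained.

```lean
import Mathlib

-- Restated from the background sketch so this block stands alone.
noncomputable def runningMax (a : ℕ → ℝ) (n : ℕ) : ℝ :=
  (Finset.range (n + 1)).sup' Finset.nonempty_range_succ a

/-- Hypothetical completion of the truncated declaration:
    λₙ = 1 / √(runningMax a n). The module's actual name and
    form are not shown in this excerpt. -/
noncomputable def rescaleFactor (a : ℕ → ℝ) (n : ℕ) : ℝ :=
  1 / Real.sqrt (runningMax a n)
```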

depends on (3)

Lean names referenced from this declaration's body.