pith. machine review for the scientific record.
theorem · proved · tactic proof · high

core_stretching_absorbed


Stretching contributions are absorbed into viscous dissipation for any discrete incompressible flow encoded by a CoreNSOperator. Researchers working on lattice-based fluid models or on the Recognition Science derivation of Navier-Stokes cite this result to establish monotonicity of the J-cost. The short tactic proof reduces the stretching bound to the pair budget via per-event inequalities, then applies transitivity with the absorbed-budget lemma.

claim

Let $op$ be a CoreNSOperator on a finite lattice with $N$ sites. Then the total stretching sum over contributions satisfies $\text{totalStretching}(op.\text{contributions}) \le -\text{totalViscous}(op.\text{contributions})$.

background

The module constructs a concrete discrete incompressible Navier-Stokes operator on a finite three-direction lattice. CoreNSOperator supplies only the physical data: lattice topology, grid spacing $h>0$, viscosity $ν>0$, a divergence-free velocity field, and transport flux; pair budgets and absorption are derived downstream from velocity gradients and Laplacians. This local setting replaces the previously free fields (pair amplitude, stretch factor, positivity, stretching bound, and absorption) by a DerivedEstimates layer built directly on the core data.
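The physical data listed above can be pictured as a structure of roughly the following shape. This is an illustrative sketch only: the field names, the encoding of the divergence-free constraint (here a placeholder predicate `DivergenceFree`), and the representation of the velocity field are assumptions, not the project's actual definition.

```lean
/-- Sketch of the core data carried by a CoreNSOperator.
    Field names and types are illustrative, not the actual definition. -/
structure CoreNSOperatorSketch (siteCount : ℕ) where
  h        : ℝ                           -- grid spacing
  h_pos    : 0 < h
  ν        : ℝ                           -- viscosity
  ν_pos    : 0 < ν
  velocity : Fin siteCount → Fin 3 → ℝ   -- velocity on the three-direction lattice
  divFree  : DivergenceeFreePlaceholder velocity  -- divergence-free hypothesis (placeholder)
  flux     : Fin siteCount → ℝ           -- transport flux
```

Everything else on the page (pair budgets, stretching and viscous totals, absorption) is derived from these fields rather than postulated, which is the point of the DerivedEstimates layer.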

proof idea

The proof first obtains an intermediate bound totalStretching ≤ coreOperatorPairBudget by unfolding the total and indexed budget definitions, then applying Finset.sum_le_sum with the per-index core_stretching_le_pair_budget lemma at each corePairEvent. It concludes by invoking le_trans on this bound and the upstream core_pair_budget_absorbed theorem.
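In symbols, the argument is the two-step chain (using the names from the proof body):

$$
\text{totalStretching}(op.\text{contributions}) \;\le\; \text{coreOperatorPairBudget}(op) \;\le\; -\,\text{totalViscous}(op.\text{contributions}),
$$

where the first inequality is the sum of the per-event bounds core_stretching_le_pair_budget via Finset.sum_le_sum, and the second is the upstream theorem core_pair_budget_absorbed; le_trans glues them together.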

why it matters in Recognition Science

This supplies the absorption step required by the downstream core_dJdt_nonpos theorem, which proves dJdt ≤ 0 for the core operator via dJdt_nonpos_of_transport_cancel_and_absorption combined with core_transport_zero. It closes the viscous-absorption link in the discrete Navier-Stokes construction inside Recognition Science, feeding the J-cost monotonicity that follows from the forcing chain and Recognition Composition Law.
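The downstream combination described above plausibly has the following shape. The exact signatures of core_dJdt_nonpos, dJdt, and the combinator are assumptions here; only the names and the role of this theorem as the absorption argument come from the text.

```lean
-- Hypothetical sketch of the downstream step; actual signatures may differ.
theorem core_dJdt_nonpos_sketch {siteCount : ℕ}
    (op : CoreNSOperator siteCount) :
    dJdt op ≤ 0 :=
  dJdt_nonpos_of_transport_cancel_and_absorption
    (core_transport_zero op)          -- transport terms cancel
    (core_stretching_absorbed op)     -- stretching absorbed into dissipation
```

The present theorem discharges the second hypothesis, closing the viscous-absorption link in the chain.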

scope and limits

Lean usage

  exact core_stretching_absorbed op

formal statement (Lean)

  theorem core_stretching_absorbed {siteCount : ℕ}
      (op : CoreNSOperator siteCount) :
      totalStretching op.contributions ≤ - totalViscous op.contributions := by

proof body

Tactic-mode proof.

    have hle : totalStretching op.contributions ≤ coreOperatorPairBudget op := by
      unfold totalStretching total coreOperatorPairBudget indexedPairBudget
      exact Finset.sum_le_sum (fun i _ =>
        by simpa [corePairEvent, pairEventBudget] using core_stretching_le_pair_budget op i)
    exact le_trans hle (core_pair_budget_absorbed op)

used by (1)

From the project-wide theorem graph: these declarations reference this one in their bodies.

depends on (15)

Lean names referenced from this declaration's body.