pith. machine review for the scientific record.
theorem · proved · tactic proof · high

learning_rate_nonneg


Physicists modeling memory retention as a thermodynamic system cite the non-negativity of the learning rate to guarantee that attention-weighted increments remain consistent with free-energy decay. The result holds for every LearningEvent whose attention lies in [0,1]. Its proof is a short tactic-mode reduction that unfolds the definition and applies mul_nonneg twice, using the already-established non-negativity of the spaced bonus.

claim

For every LearningEvent $e$ with repetition count $r$, attention weight $a$ satisfying $0 \le a \le 1$, and spacing $s$, one has $0 \le \varphi^{-r} \cdot a \cdot (1 + b(e))$, where $b(e)$ is the spaced bonus of $e$.

background

The MemoryLedger module treats memory traces as a thermodynamic system in which retention competes with free-energy decay under Recognition Science rules. A LearningEvent packages a LedgerMemoryTrace with an attention scalar bounded in [0,1], a repetition count, and a spacing interval. The learning_rate is defined as $\varphi^{-r} \cdot a \cdot (1 + b(e))$: ϕ raised to the negative repetition count, times attention, times one plus the spaced bonus. This non-negativity result depends on the sibling theorem spaced_bonus_nonneg and the arithmetic closure of non-negative reals under multiplication.
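The definitions described above might be sketched in Lean as follows. This is a hypothetical reconstruction, not the actual MemoryLedger source: the field names, the bundled attention_bounded hypothesis, and the use of real-exponent rpow are all assumptions made for illustration.

```lean
-- Hypothetical sketch of the MemoryLedger definitions described above.
-- Field names and types are assumptions, not the actual module source.
structure LearningEvent where
  trace       : LedgerMemoryTrace
  attention   : ℝ
  repetitions : ℕ
  spacing     : ℝ
  attention_bounded : 0 ≤ attention ∧ attention ≤ 1

-- learning_rate e = φ^(-r) * a * (1 + b e), matching the claim above.
noncomputable def learning_rate (e : LearningEvent) : ℝ :=
  φ ^ (-(e.repetitions : ℝ)) * e.attention * (1 + spaced_bonus e)
```

Bundling the [0,1] bound into the structure is what lets the proof below cite event.attention_bounded.1 directly for the lower bound.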

proof idea

The proof unfolds learning_rate and then applies mul_nonneg. The first factor, $\varphi^{-r} \cdot a$, is shown non-negative by mul_nonneg applied to (rpow_pos_of_pos phi_pos _).le and the lower bound $0 \le a$ from attention_bounded. The second factor, $1 + b(e)$, follows by linarith applied to spaced_bonus_nonneg.

why it matters in Recognition Science

The theorem is listed among the fully proven results in the module documentation and is referenced by memory_ledger_status to certify completion of the thermodynamic memory suite. It supports consistency of learning increments with the Recognition Composition Law and the phi-ladder structure in the forcing chain (T5–T6). No open scaffolding remains for this claim.

scope and limits

The claim assumes only what LearningEvent already packages: an attention weight in [0,1] and a non-negative spaced bonus. Repetition count and spacing enter the rate only through $\varphi^{-r}$ and $b(e)$ and impose no further hypotheses.

formal statement (Lean)

 theorem learning_rate_nonneg (event : LearningEvent) : 0 ≤ learning_rate event := by

proof body

Tactic-mode proof.

   unfold learning_rate
   apply mul_nonneg
   · apply mul_nonneg (rpow_pos_of_pos phi_pos _).le event.attention_bounded.1
   · linarith [spaced_bonus_nonneg event]
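For comparison, the same argument can be packaged as a single term. This is a hedged sketch, assuming learning_rate unfolds definitionally to the product above and that the lemma names resolve exactly as in the tactic proof; it is not part of the certified module.

```lean
-- Hypothetical term-mode equivalent of the tactic proof above.
theorem learning_rate_nonneg' (event : LearningEvent) :
    0 ≤ learning_rate event :=
  mul_nonneg
    -- φ^(-r) ≥ 0 times a ≥ 0: both factors of the leading product
    (mul_nonneg (rpow_pos_of_pos phi_pos _).le event.attention_bounded.1)
    -- 1 + b(e) ≥ 0 from the spaced-bonus lemma
    (by linarith [spaced_bonus_nonneg event])
```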

used by (1)

From the project-wide theorem graph. These declarations reference this one in their bodies.

depends on (4)

Lean names referenced from this declaration's body.