pith. machine review for the scientific record.
theorem · proved · tactic proof · high

max_entropy_uniform


Under the Shannon entropy definition, the entropy of the uniform distribution over n outcomes (n positive) equals the natural logarithm of n. Information theorists and Recognition Science researchers would cite this when linking entropy to J-cost minimization over distributions. The tactic proof reduces the claim by unfolding the entropy and uniform definitions, invoking the logarithm quotient identity, and closing with field simplification followed by the ring tactic.

claim

For any positive natural number $n$, the Shannon entropy $H = -p_1 \log p_1 - \cdots - p_n \log p_n$ of the uniform distribution (each $p_i = 1/n$) equals $\log n$.
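A concrete instance, for $n = 3$: each $p_i = 1/3$, so

$$H = -3 \cdot \frac{1}{3}\log\frac{1}{3} = -\log\frac{1}{3} = \log 3.$$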

background

The module Information.ShannonEntropy derives Shannon entropy from Recognition Science J-cost. J-cost is given by $J(x) = (x + x^{-1})/2 - 1$, with the shifted form $H(x) = J(x) + 1 = (x + x^{-1})/2$ satisfying the d'Alembert equation $H(xy) + H(x/y) = 2H(x)H(y)$, as stated in the upstream CostAlgebra definition. The uniform distribution assigns probability $1/n$ to each of $n$ outcomes; shannonEntropy applies the standard formula $-\sum_i p_i \log p_i$ to any probability distribution.
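The d'Alembert identity for the shifted form can be checked directly in Lean. A minimal self-contained sketch — the names J and Hc below are illustrative stand-ins, not the module's own declarations:

  import Mathlib

  -- Illustrative restatements of the J-cost and its shifted form from the text.
  noncomputable def J (x : ℝ) : ℝ := (x + x⁻¹) / 2 - 1
  noncomputable def Hc (x : ℝ) : ℝ := J x + 1  -- Hc x = (x + x⁻¹) / 2

  -- d'Alembert equation for nonzero x, y; field_simp clears the inverses,
  -- then ring closes the resulting polynomial identity.
  example (x y : ℝ) (hx : x ≠ 0) (hy : y ≠ 0) :
      Hc (x * y) + Hc (x / y) = 2 * Hc x * Hc y := by
    unfold Hc J
    field_simp
    ring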

proof idea

The proof unfolds shannonEntropy and uniform, casts the positivity of $n$ to the reals via Nat.cast_pos (from which positivity yields $1/n > 0$), applies Real.log_div to obtain $\log(1/n) = -\log n$, reduces the scaled product via field_simp, and finishes with ring.
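Written out, the computation behind the tactic steps is:

$$H = -\sum_{i=1}^{n} \frac{1}{n}\log\frac{1}{n} = -n \cdot \frac{1}{n}\log\frac{1}{n} = -\log\frac{1}{n} = -(\log 1 - \log n) = \log n.$$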

why it matters in Recognition Science

This result is invoked directly by uniform_has_max_entropy in InformationIsLedger, which records IC-001.12: maximum entropy occurs for the uniform distribution and equates it to maximum information cost. It completes the INFO-001 derivation of Shannon entropy from J-cost structure in the paper Foundation of physics - Information theory from RS, placing the uniform case inside the broader forcing chain where entropy quantifies deviation from uniformity.

scope and limits

The theorem computes the entropy of the uniform distribution only; the claim that this value is maximal among all distributions on $n$ outcomes is recorded separately by uniform_has_max_entropy. The hypothesis $n > 0$ (hn) is required so that $1/n$ is a genuine probability.

Lean usage

  exact ShannonEntropy.max_entropy_uniform n hn
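For instance, as a standalone lemma application — a hypothetical sketch, assuming the module is imported and its names resolve as written:

  -- hypothetical calling context; shannonEntropy and uniform come from Information.ShannonEntropy
  example (n : ℕ) (hn : n > 0) :
      shannonEntropy (uniform n hn) = Real.log n :=
    ShannonEntropy.max_entropy_uniform n hn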

formal statement (Lean)

  theorem max_entropy_uniform (n : ℕ) (hn : n > 0) :
      shannonEntropy (uniform n hn) = Real.log n := by

proof body

Tactic-mode proof.

  -- H = -n × (1/n) × log(1/n) = -log(1/n) = log(n)
  unfold shannonEntropy uniform
  simp only
  have hn_pos : (0 : ℝ) < n := Nat.cast_pos.mpr hn
  have hn_ne : (n : ℝ) ≠ 0 := ne_of_gt hn_pos
  have h_prob_pos : (1 : ℝ) / n > 0 := by positivity
  simp only [h_prob_pos, ↓reduceIte, Finset.sum_const, Finset.card_fin, nsmul_eq_mul]
  -- Goal: -(n * (1/n * log(1/n))) = log(n)
  have h_log : Real.log (1 / n) = -Real.log n := by
    rw [Real.log_div (by norm_num : (1 : ℝ) ≠ 0) hn_ne, Real.log_one, zero_sub]
  rw [h_log]
  have h_simp : (n : ℝ) * (1 / n * -Real.log n) = -Real.log n := by
    field_simp
  rw [h_simp]
  ring
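As an aside, the h_log step could likely also be closed with Mathlib's Real.log_inv, which carries no nonzeroness side condition — a sketch, assuming a current Mathlib:

  -- log (1/n) = log (n⁻¹) = -log n; Real.log_inv holds unconditionally
  example (n : ℕ) : Real.log (1 / (n : ℝ)) = -Real.log n := by
    rw [one_div, Real.log_inv]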

used by (1)

From the project-wide theorem graph. These declarations reference this one in their body.

  uniform_has_max_entropy (InformationIsLedger)

depends on (9)

Lean names referenced from this declaration's body.