Multi-Level Composition Forces the Golden Ratio
Uniform scaling across hierarchical recognition layers reduces to phi.
Equations
$$ J(xy)+J(x/y)=2J(x)J(y)+2J(x)+2J(y) $$
Recognition Composition Law.
Derivation chain (Lean anchors)
Each row links to the corresponding Lean 4 declaration in the Recognition Science canon. A resolved anchor has a green check; an unresolved anchor flags a registry/canon mismatch.
1. Hierarchy forces phi (theorem, checked): IndisputableMonolith.Foundation.HierarchyForcing.hierarchy_forced_gives_phi
2. Uniform scaling forced (theorem, checked): IndisputableMonolith.Foundation.HierarchyForcing.uniform_scaling_forced
3. Additive composition minimal (theorem, checked): IndisputableMonolith.Foundation.HierarchyForcing.additive_composition_is_minimal
Narrative
1. Setting
Multi-Level Composition Forces the Golden Ratio is anchored in Foundation.HierarchyForcing. The page is not a loose explainer: it is a public map from the Recognition Science forcing chain into one Lean-checked declaration bundle. The primary anchor determines what is proved, and the surrounding declarations show how the result is used.
2. Equations
(E1)
$$ J(xy)+J(x/y)=2J(x)J(y)+2J(x)+2J(y) $$
Recognition Composition Law.
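Although this page does not state the closed form, the canonical reciprocal cost anchored by the J-cost uniqueness reference is commonly written $J(x) = \tfrac{1}{2}\left(x + \tfrac{1}{x}\right) - 1$; assuming that form, (E1) can be verified directly. Write $u = \tfrac{1}{2}\left(x + \tfrac{1}{x}\right)$ and $v = \tfrac{1}{2}\left(y + \tfrac{1}{y}\right)$, so $J(x) = u - 1$ and $J(y) = v - 1$. Then

$$ J(xy)+J\!\left(\tfrac{x}{y}\right) = \tfrac{1}{2}\left(xy + \tfrac{1}{xy} + \tfrac{x}{y} + \tfrac{y}{x}\right) - 2 = \tfrac{1}{2}\left(x + \tfrac{1}{x}\right)\left(y + \tfrac{1}{y}\right) - 2 = 2uv - 2 $$

$$ 2J(x)J(y)+2J(x)+2J(y) = 2(u-1)(v-1) + 2(u-1) + 2(v-1) = 2uv - 2 $$

so the two sides agree identically.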
3. Prediction or structural target
- Structural target:
Foundation.HierarchyForcing must keep resolving in the Lean canon, and all downstream pages that cite this anchor must continue to type-check.
This page is currently a structural derivation. Where the claim has direct empirical content, the prediction table gives the measurable target; otherwise the claim is a formal bridge inside the Lean canon.
4. Formal anchor
The primary anchor is Foundation.HierarchyForcing.hierarchy_forced_gives_phi.
/-- The forced hierarchy yields σ = φ. -/
theorem hierarchy_forced_gives_phi
    (M : NontrivialMultilevelComposition)
    (no_free_scale : ∀ j k,
      M.levels (j + 1) / M.levels j = M.levels (k + 1) / M.levels k)
    (ratio_gt_one : 1 < M.levels 1 / M.levels 0)
    (additive : M.levels 2 = M.levels 1 + M.levels 0) :
    (hierarchy_forced M no_free_scale ratio_gt_one).ratio = PhiForcing.φ :=
  hierarchy_emergence_forces_phi
    (hierarchy_forced M no_free_scale ratio_gt_one)
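The hypotheses reduce to one line of algebra. Writing $L_j$ for M.levels j and $\sigma$ for the common ratio supplied by no_free_scale, the additive hypothesis forces the golden-ratio fixed-point equation:

$$ L_1 = \sigma L_0, \qquad L_2 = \sigma^2 L_0, \qquad L_2 = L_1 + L_0 \;\Longrightarrow\; \sigma^2 = \sigma + 1 $$

and ratio_gt_one selects the positive root $\sigma = \tfrac{1+\sqrt{5}}{2} = \varphi$ rather than the negative one.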
5. What is inside the Lean module
Key theorems:
scale_perturbed_pos, scale_perturbed_low, scale_perturbed_family_injective, uniform_scaling_forced, additive_composition_is_minimal, min_max_achieved, other_pairs_larger, hierarchy_forced_gives_phi
Key definitions:
ScalePerturbed, NontrivialMultilevelComposition, hierarchy_forced
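A self-contained toy in the same spirit (structure and field names are guessed from the theorem statement above, not taken from the canon) shows the fixed-point fact the chain terminates in:

import Mathlib

/-- Toy stand-in for `NontrivialMultilevelComposition`; illustrative only. -/
structure ToyComposition where
  levels : ℕ → ℝ
  pos    : ∀ j, 0 < levels j

/-- The uniform level ratio of a toy composition. -/
noncomputable def ToyComposition.ratio (M : ToyComposition) : ℝ :=
  M.levels 1 / M.levels 0

/-- The golden ratio, named to avoid clashing with `PhiForcing.φ`. -/
noncomputable def goldenRatio : ℝ := (1 + Real.sqrt 5) / 2

/-- The fixed-point equation σ² = σ + 1 that the additive hypothesis forces. -/
example : goldenRatio ^ 2 = goldenRatio + 1 := by
  have h : Real.sqrt 5 ^ 2 = 5 := Real.sq_sqrt (by norm_num)
  unfold goldenRatio
  nlinarith [h]

The canon's hierarchy_forced presumably packages the same algebra with the positivity and ratio hypotheses; this sketch only fixes the terminal identity.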
6. Derivation chain
- hierarchy_forced_gives_phi (Hierarchy forces phi)
- uniform_scaling_forced (Uniform scaling forced)
- additive_composition_is_minimal (Additive composition minimal)
7. Falsifier
Producing a self-consistent multi-level recognition composition that admits a fixed point other than phi breaks hierarchy_forced_gives_phi.
8. Where this derivation stops
Below this page the chain reduces to the RS forcing sequence: J-cost uniqueness, phi forcing, the eight-tick cycle, and the D=3 recognition substrate. If any upstream theorem changes, this page must be versioned rather than patched silently. The published URL is stable, but the version field is the contract.
9. Audit path
To audit hierarchy-yields-phi, start with the primary Lean anchor Foundation.HierarchyForcing.hierarchy_forced_gives_phi. Then inspect the theorem names listed in the module-content section. The page is intentionally built so the public explanation is not a substitute for the proof object; it is a map into it. The mathematical dependency is the same in every case: reciprocal cost fixes J, J fixes the phi-ladder, the eight-tick cycle fixes the recognition clock, and the domain theorem listed above supplies the last step. If that last step is empirical, the falsifier section names what observation would break it. If that last step is formal, a Lean-checkable counterexample is the relevant failure mode.
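A minimal entry point for that audit is a file that elaborates the anchors directly (the import line is an assumption about the library's Lake layout; adjust to the published canon):

import IndisputableMonolith

#check @IndisputableMonolith.Foundation.HierarchyForcing.hierarchy_forced_gives_phi
#check @IndisputableMonolith.Foundation.HierarchyForcing.uniform_scaling_forced
#check @IndisputableMonolith.Foundation.HierarchyForcing.additive_composition_is_minimal

If all three #check lines elaborate, the registry and the canon agree on this page's anchor bundle.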
References
- lean: Recognition Science Lean library (IndisputableMonolith). https://github.com/jonwashburn/shape-of-logic. Public Lean 4 canon used by Pith theorem pages.
- paper: Uniqueness of the Canonical Reciprocal Cost. Peer-reviewed paper anchoring the J-cost uniqueness theorem.
- spec: Recognition Science Full Theory Specification. https://recognitionphysics.org. High-level theory specification and public program context for Recognition Science derivations.
How to cite this derivation
- Stable URL: https://pith.science/derivations/hierarchy-yields-phi
- Version: 5
- Published: 2026-05-14
- Updated: 2026-05-15
- JSON: https://pith.science/derivations/hierarchy-yields-phi.json
- YAML source: pith/derivations/registry/bulk/hierarchy-yields-phi.yaml
@misc{pith-hierarchy-yields-phi,
title = "Multi-Level Composition Forces the Golden Ratio",
author = "Recognition Physics Institute",
year = "2026",
url = "https://pith.science/derivations/hierarchy-yields-phi",
note = "Pith Derivations, version 5"
}