The d'Alembert Recognition Composition Law
J satisfies J(xy)+J(x/y)=2J(x)J(y)+2J(x)+2J(y), uniquely. **Law of Logic cost theorem**: The J-cost function is the unique reciprocal cost satisfying the Recognition Composition Law (RCL), normalization, calibration, and continuity.
Predictions
| Quantity | Predicted | Units | Empirical | Source |
|---|---|---|---|---|
| canonical cost | unique J-cost | dimensionless | Lean theorem | Cost.FunctionalEquation.washburn_uniqueness_aczel |
Equations
$$ J(xy)+J(x/y)=2J(x)J(y)+2J(x)+2J(y),\qquad J(x)=\frac12(x+x^{-1})-1 $$
Recognition Composition Law and its unique solution.
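A one-line change of variables shows why the d'Alembert name applies. Setting $x=e^u$ and $G(u)=J(e^u)+1$, then adding 2 to both sides of the RCL, gives

$$ G(u+v)+G(u-v)=2\,G(u)\,G(v), $$

the classical d'Alembert functional equation. Its continuous even solution with $G(0)=1$ and unit calibration is $G(u)=\cosh u$, hence $J(x)=\cosh(\log x)-1=\tfrac12(x+x^{-1})-1$.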
Derivation chain (Lean anchors)
Each row links to the corresponding Lean 4 declaration in the Recognition Science canon. A resolved anchor has a green check; an unresolved anchor flags a registry/canon mismatch.
1. Washburn-Aczel uniqueness theorem (theorem, anchor resolved): IndisputableMonolith.Cost.FunctionalEquation.washburn_uniqueness_aczel
Narrative
1. Setting
The J-cost functional equation is the root of the entire derivations system. Its uniqueness theorem says that reciprocal symmetry, normalization, the Recognition Composition Law, continuity, and calibration leave only one possible cost function. All downstream constants inherit this rigidity.
2. Equations
(E1)
$$ J(xy)+J(x/y)=2J(x)J(y)+2J(x)+2J(y),\qquad J(x)=\frac12(x+x^{-1})-1 $$
Recognition Composition Law and its unique solution.
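As a sanity check (not part of the proof), the closed form can be verified numerically against the RCL. The helper names below are illustrative, not part of the Lean canon:

```python
import math

def J(x):
    """Canonical reciprocal cost: J(x) = (x + 1/x)/2 - 1."""
    return 0.5 * (x + 1.0 / x) - 1.0

def rcl_residual(x, y):
    """Left side minus right side of the RCL; zero when J satisfies it."""
    lhs = J(x * y) + J(x / y)
    rhs = 2 * J(x) * J(y) + 2 * J(x) + 2 * J(y)
    return lhs - rhs

# Spot-check on a grid of positive arguments.
for x in (0.5, 1.0, math.e, 3.7):
    for y in (0.25, 1.0, 2.0):
        assert abs(rcl_residual(x, y)) < 1e-9
print("RCL holds on all sampled points")
```

A finite grid of course only illustrates the identity; the Lean theorem is what carries the uniqueness claim.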
3. Prediction or structural target
- canonical cost: predicted to be the unique J-cost (dimensionless); empirical status: Lean theorem. Source: Cost.FunctionalEquation.washburn_uniqueness_aczel
This entry is one of the marquee derivations. The numerical or formal target is explicit, and the falsifier identifies the failure mode.
4. Formal anchor
The primary anchor is Cost.FunctionalEquation.law_of_logic_forces_jcost.
5. What is inside the Lean module
Key theorems:
- CoshAddIdentity_implies_DirectCoshAdd
- G_even_of_reciprocal_symmetry
- G_zero_of_unit
- Jcost_G_eq_cosh_sub_one
- Jcost_cosh_add_identity
- even_deriv_at_zero
- dAlembert_even
- dAlembert_double
- dAlembert_product
- dAlembert_diff_square
- sub_one_eq_mul_ratio
- tendsto_H_one_of_log_curvature
Key definitions:
- G
- H
- CoshAddIdentity
- DirectCoshAdd
- HasLogCurvature
- ode_linear_regularity_bootstrap_hypothesis
- ode_regularity_continuous_hypothesis
- ode_regularity_differentiable_hypothesis
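For orientation, a plausible Lean 4 shape for the RCL predicate is sketched below. The identifier and signature are illustrative only; the canonical module's actual declarations (G, H, CoshAddIdentity, and the rest) may be stated differently.

```lean
-- Hypothetical sketch only, not the canon's real definition.
-- The RCL as a predicate on a cost function over the positive reals.
def SatisfiesRCL (J : ℝ → ℝ) : Prop :=
  ∀ x y : ℝ, 0 < x → 0 < y →
    J (x * y) + J (x / y) = 2 * J x * J y + 2 * J x + 2 * J y
```

Stating the law as a predicate lets the uniqueness theorem quantify over all candidates satisfying it, together with the symmetry, normalization, calibration, and continuity hypotheses.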
6. Derivation chain
- law_of_logic_forces_jcost: Law of Logic cost theorem
7. Falsifier
A continuous reciprocal-symmetric calibrated function satisfying the RCL but not equal to J refutes the theorem.
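The falsifier can be made concrete with a numerical sketch: perturb J by a reciprocal-symmetric term that preserves J(1)=0 and continuity, and the RCL fails. The perturbation below is an arbitrary illustration, not a candidate anyone has proposed:

```python
import math

def J(x):
    """Canonical cost J(x) = (x + 1/x)/2 - 1."""
    return 0.5 * (x + 1 / x) - 1

def G(x, eps=0.1):
    """Perturbed candidate: still reciprocal-symmetric (log(1/x)^2 = log(x)^2),
    continuous on (0, inf), and G(1) = 0."""
    return J(x) + eps * math.log(x) ** 2

def rcl_gap(f, x, y):
    """Left side minus right side of the Recognition Composition Law."""
    return f(x * y) + f(x / y) - (2 * f(x) * f(y) + 2 * f(x) + 2 * f(y))

print(f"gap for J at (2, 3): {rcl_gap(J, 2.0, 3.0):.2e}")  # numerically zero
print(f"gap for G at (2, 3): {rcl_gap(G, 2.0, 3.0):.2e}")  # clearly nonzero
assert abs(rcl_gap(J, 2.0, 3.0)) < 1e-9
assert abs(rcl_gap(G, 2.0, 3.0)) > 1e-3
```

A genuine falsifier would have to satisfy all of the hypotheses and still show a zero gap everywhere, which is exactly what the uniqueness theorem rules out.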
8. Where this derivation stops
Below this page the chain reduces to the RS forcing sequence: J-cost uniqueness, phi forcing, the eight-tick cycle, and the D=3 recognition substrate. If any upstream theorem changes, this page must be versioned rather than patched silently. The published URL is stable, but the version field is the contract.
9. Reading note
The minimal way to audit this page is to open the first Lean anchor and then walk the supporting declarations listed above. If the primary theorem is a module-level anchor, the key theorems section names the internal declarations that carry the mathematical load. This keeps the public derivation readable without severing it from the proof object.
10. Audit path
To audit j-cost-functional-equation, start with the primary Lean anchor Cost.FunctionalEquation.law_of_logic_forces_jcost. Then inspect the theorem names listed in the module-content section. The page is intentionally built so the public explanation is not a substitute for the proof object; it is a map into it. The mathematical dependency is the same in every case: reciprocal cost fixes J, J fixes the phi-ladder, the eight-tick cycle fixes the recognition clock, and the domain theorem listed above supplies the last step. If that last step is empirical, the falsifier section names what observation would break it. If that last step is formal, a Lean-checkable counterexample is the relevant failure mode.
11. Why this belongs in the derivations corpus
The corpus is organized around load-bearing consequences, not around file names. This entry is included because Cost.FunctionalEquation contributes a reusable theorem or definitional bridge that other pages can cite. Keeping the page public gives readers a stable URL, a JSON record, and a direct path into the Lean theorem page. If the entry becomes redundant with a stronger derivation later, the current slug should be retired rather than silently rewritten; the replacement should absorb its anchors and preserve the audit history.
Related derivations
Pith papers using these anchors
References
- [lean] Recognition Science Lean library (IndisputableMonolith). https://github.com/jonwashburn/shape-of-logic. Public Lean 4 canon used by Pith theorem pages.
- [paper] Uniqueness of the Canonical Reciprocal Cost. Peer-reviewed paper anchoring the J-cost uniqueness theorem.
- [spec] Recognition Science Full Theory Specification. https://recognitionphysics.org. High-level theory specification and public program context for Recognition Science derivations.
How to cite this derivation
- Stable URL: https://pith.science/derivations/j-cost-functional-equation
- Version: 6
- Published: 2026-05-14
- Updated: 2026-05-15
- JSON: https://pith.science/derivations/j-cost-functional-equation.json
- YAML source: pith/derivations/registry/bulk/j-cost-functional-equation.yaml
@misc{pith-j-cost-functional-equation,
title = "The d'Alembert Recognition Composition Law",
author = "Recognition Physics Institute",
year = "2026",
url = "https://pith.science/derivations/j-cost-functional-equation",
note = "Pith Derivations, version 6"
}