rational_agent
The declaration establishes that the J-cost vanishes at unit input, serving as the zero-bias baseline for a rational agent in the Recognition Science model of behavioral economics. Decision theorists would cite it when separating perfect recognition from systematic errors such as loss aversion or anchoring. The proof is a direct term application of the upstream unit lemma for J-cost.
claim
The J-cost of the unit value satisfies $J(1) = 0$.
background
In the Recognition Science treatment of behavioral economics the J-cost function measures recognition error and is given by $J(x) = (x-1)^2/(2x)$. The module treats five canonical findings (loss aversion, anchoring, hyperbolic discounting, herding, overconfidence) as the dimension count D=5, with rational agents defined by zero cost and biased agents by positive cost. The local setting therefore uses J=0 as the perfect-recognition limit before introducing bias parameters such as loss aversion λ ≈ φ².
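With the stated formula for the J-cost, the claim follows by direct substitution at $x = 1$:

```latex
J(1) = \frac{(1-1)^2}{2 \cdot 1} = \frac{0}{2} = 0
```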
proof idea
The proof is a one-line wrapper that applies the lemma Jcost_unit0 from the Cost module, which itself reduces by direct simplification of the J-cost definition.
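The shape of that wrapper can be sketched as follows. This is a sketch only: the actual Cost module may define `Jcost` differently (e.g. noncomputably over a different field), and only the names `Jcost`, `Jcost_unit0`, and `rational_agent` come from the source.

```lean
-- Sketch, assuming Jcost is defined as in the background section.
noncomputable def Jcost (x : ℝ) : ℝ := (x - 1)^2 / (2 * x)

-- Upstream unit lemma: reduces by simplification of the definition.
lemma Jcost_unit0 : Jcost 1 = 0 := by
  simp [Jcost]

-- The theorem is then a direct term application of the lemma.
theorem rational_agent : Jcost 1 = 0 := Jcost_unit0
```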
why it matters in Recognition Science
The theorem supplies the rational-agent component inside the BehavioralEconomicsCert definition that certifies the five findings. It anchors the zero-bias case against which biased agents (J>0) are contrasted and connects to the T5 J-uniqueness step of the forcing chain. The result therefore closes the baseline needed for deriving RS-native constants such as loss aversion from the phi-ladder.
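The certificate's actual definition is not reproduced in this entry. Purely as an illustration of how `rational_agent` could discharge the zero-bias field, a structure of the following hypothetical shape would consume it (the field names below are invented, not from the source):

```lean
-- Hypothetical shape for illustration only; field names are not from the source.
structure BehavioralEconomicsCert where
  rational_baseline : Jcost 1 = 0                    -- discharged by rational_agent
  biased_positive   : ∀ x : ℝ, 0 < x → x ≠ 1 → 0 < Jcost x
  -- ... further fields certifying the five findings
```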
scope and limits
- Does not derive any specific bias parameter such as loss aversion.
- Does not treat multi-agent or market-level dynamics.
- Does not supply empirical calibration or simulation code.
Lean usage
have h : Jcost 1 = 0 := rational_agent
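A slightly larger, illustrative use: rewriting with the theorem to turn a comparison against the rational baseline into a positivity fact (this `example` is a sketch, assuming `Jcost` and `rational_agent` are in scope):

```lean
example (x : ℝ) (hx : Jcost 1 < Jcost x) : 0 < Jcost x := by
  simpa [rational_agent] using hx
```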
formal statement (Lean)
theorem rational_agent : Jcost 1 = 0 := Jcost_unit0
proof body
Term-mode proof.