structure definition
SatisfiesLawsOfLogic
Source: IndisputableMonolith.Foundation.LogicAsFunctionalEquation on GitHub at line 149.
depends on
- ComparisonOperator
- ExcludedMiddle
- Identity
- NonContradiction
- NonTrivial
- Normalization
- RouteIndependence
- ScaleInvariant
used by
- back
- Generator
- generatorOfLawsOfLogic
- real_supports_logic
- ContinuousRouteIndependence
- laws_continuous_subsumes_polynomial
- laws_polynomial_implies_continuous
- mollifierCkRoute_exists
- polynomial_implies_continuous
- J_is_unique_cost_under_logic
- laws_of_logic_imply_dalembert_hypotheses
- RCL_is_unique_functional_form_of_logic
- canonicality_of_encoding
- canonical_scale_invariance
- operative_derived_cost_symmetric
- operative_to_laws_of_logic
- finite_logical_satisfies_laws
- operative_domain_rcl_logic_reality_chain
- operative_domain_satisfies_logic
- rcl_logic_reality_chain
- reality_satisfies_logic
- ofPositiveRatioComparison
- positiveRatio_faithful
- positiveRatio_hasIdentityStep
- positiveRatio_interpret_injective
- positiveRatioOrbitInterpret
- positiveRatioOrbitInterpret_val
- l4DerivableCert_inhabited
- L4_derivable_on_multiplicative_event_space
- MultiplicativeL4Polynomial
- MultiplicativeRecognizer
- canonical_iff_existing
- constZero_not_nonTrivial
- existing_of_absoluteFloor
- nonTrivial_iff_distinguishability
- aristotelian_decomposition
- continuous_positive_ratio_arithmetic_invariant
- continuous_arith_equiv_logicNat
- continuousRealization
- arith_continuous_equiv_arith_discrete
formal source

Aristotelian constraints hold, together with scale invariance (the bridge
from two-argument to one-argument form) and non-triviality (so that the
derived cost is not vacuously zero). -/
structure SatisfiesLawsOfLogic (C : ComparisonOperator) : Prop where
  identity : Identity C
  non_contradiction : NonContradiction C
  excluded_middle : ExcludedMiddle C
  scale_invariant : ScaleInvariant C
  route_independence : RouteIndependence C
  non_trivial : NonTrivial C

/-! ## Translation Lemmas

The four Aristotelian constraints, applied to the derived cost function
F(r) := C(r, 1), produce the hypotheses of the d'Alembert Inevitability
Theorem.
-/

/-- **Translation lemma 1 (Identity ⇒ Normalization)**: If a comparison
operator satisfies Identity, then the derived cost function takes value
zero at the multiplicative identity. -/
theorem identity_implies_normalized (C : ComparisonOperator)
    (hId : Identity C) :
    IsNormalized (derivedCost C) := by
  unfold IsNormalized derivedCost
  exact hId 1 one_pos

/-- **Translation lemma 2 (Non-contradiction + Scale invariance ⇒ Reciprocity)**:
If a comparison operator is single-valued under argument reordering and
depends only on ratios, then the derived cost function is invariant under
inversion of its argument: F(x) = F(1/x).

The chain of equalities:
  F(x) = C(x, 1)    definition of derivedCost
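The excerpt cuts off mid-chain. For orientation, here is a minimal Lean sketch of how the remaining steps of Translation lemma 2 could go. Everything below is a hypothetical reconstruction, not the repository's proof: a bare function `C : ℝ → ℝ → ℝ` stands in for the `ComparisonOperator` carrier, and `hSI`/`hNC` are assumed shapes for what `ScaleInvariant` and `NonContradiction` might supply.

```lean
import Mathlib

-- Hypothetical sketch of Translation lemma 2, NOT the repository's proof.
-- `C : ℝ → ℝ → ℝ` stands in for the unknown `ComparisonOperator` carrier;
-- `hSI` and `hNC` are assumed shapes for ScaleInvariant and NonContradiction.
example (C : ℝ → ℝ → ℝ)
    (hNC : ∀ x y : ℝ, C x y = C y x)                       -- single-valued under reordering
    (hSI : ∀ l x y : ℝ, 0 < l → C (l * x) (l * y) = C x y) -- depends only on ratios
    (x : ℝ) (hx : 0 < x) :
    C x 1 = C (1 / x) 1 := by
  calc C x 1 = C 1 x := hNC x 1
    _ = C (x * (1 / x)) (x * 1) := by
        rw [mul_one_div_cancel hx.ne', mul_one]  -- x * (1/x) = 1 since x ≠ 0
    _ = C (1 / x) 1 := hSI x (1 / x) 1 hx
```

This matches the stated chain F(x) = C(x, 1) = C(1, x) = C(1/x, 1) = F(1/x); the actual proof presumably works through the `ComparisonOperator` structure rather than a bare function, and the Mathlib lemma names may need adjustment.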
papers checked against this theorem (showing 28 of 28)
- Language tasks mixed into pre-training lift linguistic scores
  "L2T transforms raw text into structured input-output pairs to provide explicit linguistic stimulation... Pre-training LMs on a mixture of raw text and L2T data... improves... linguistic competence benchmarks"
- Logical reward during fine-tuning removes contradictory weather answers
  "RLoCo = 1 iff fa_rp = fa and RFormat = 1 (Eq. 2); Self-Contra proportion drops from ~30% (RFT) to ~2% (LoCo-RFT)"
- RAG models score 93% on facts but 35% on logical coherence
  "Grounded in Horn Rules, our approach integrates a backward verification mechanism to systematically evaluate three key reasoning dimensions: Completeness, Conciseness, and Determinateness."
- Agentic loops evolve logic task families for stronger training
  "SSLogic is an agentic meta-synthesis framework in which LLM agents iteratively author and refine executable Generator-Validator pairs inside a closed Generate-Validate-Refine loop"
- Variational module filters noise to boost robot policies
  "the variational regularization term through the lens of the information bottleneck principle... LIB = I(Ẑ,S;Y) − α I(Ẑ;X) ... L_ELBO := I_BA(Ẑ,S;Y) − α R(Ẑ;X)"
- Distinct potentials sustain indefinite drop self-propulsion
  "R = −∑r (ν+r − ν−r) jr with local detailed balance kBT ln(j+/j−) = ⟨ν+−ν−, δF/δψ⟩"
- RL training makes bias cues non-predictive of LLM reward
  "the optimal policy must ignore b entirely"
- Linear matrix purifies LLM logits to block distillation
  "minimizing I(X;Z|Y) ... Markov chain Y→X→Z→Z′ ... I(X;Z|Y)≥I(X;Z′|Y)"
- Bootstrap yields analytic instanton corrections for ABJM free energy
  "the Baxter equation ψ(x+iπk|z) + ψ(x−iπk|z) = z / (2 cosh(x/2)) ψ(x|z)"
- Goldilocks sampling raises reasoning RL performance
  "y_q = √[p̂_q(1-p̂_q)] ... Teacher continuously aligns its predictions with the Student’s evolving capabilities"
- Token-level ratio matching aligns models at each generation step
  "framing preference learning as likelihood-ratio estimation under Bregman divergences, connecting DPO to classical density-ratio estimators"
- Fuzzy logic constraints refine SAM pseudo-labels for segmentation
  "integrating differentiable fuzzy logic with deep segmentation models... product t-norm semantics... semantic loss L(x,ϕ):=−logp(ϕ|x)"
- Tri-component split isolates label signals for graph OOD detection
  "IBZ = max I(Z;Y) − βZ I(X,A;Z) ... final objective combines these terms and regularisers"
- Validation rules block unsupported GenAI claims in formal safety arguments
  "The validation kernel enforces evidence completeness, rule coverage, non-contradiction, and provenance completeness before any fragment enters the decision record (Algorithm 1, Section V)."
- Logical soundness misses misleading claims in LLM fact checks
  "We argue that such approaches structurally fail to detect misleading claims due to systematic divergences between conclusions that are logically sound and inferences that humans typically make and accept."
- One mode drives both slowing and giant response in photon condensate
  "A single weakly damped collective photon-reservoir mode governs both effects"
- BB plot validates Bayes factor accuracy via distribution relationship
  "The BB plot is presented as a diagnostic that validates approximate Bayes-factor calculations without ground truth."
- Kaczmarz normalization improves linear attention perplexity
  "Proposition 1 (Exact proximal form). ... the relaxed update in (3) is the exact minimizer of min_S ½∥S − S̃_{t−1}∥²_F + μ_t/2 ∥S^T k_t − v_t∥²₂."
- Explicit rubrics from VLMs beat pairwise rewards for image alignment
  "ARR externalizes a VLM’s internalized preference knowledge as prompt-specific rubrics, translating holistic intent into independently verifiable quality dimensions. This conversion of implicit preference structure into inspectable, interpretable constraints substantially suppresses evaluation biases"
- Models fail to recover how rules interact in content moderation
  "RuleSafe-VL formalizes 93 atomic rules and 92 typed rule relations... four diagnostic tasks... rule-relation recovery as the dominant bottleneck"
- Logical reasoning over motion concepts recognizes actions from skeletons
  "we employ a standard spatio-temporal skeleton encoder... mapped to interpretable concept predicates via a spatio-temporal concept decoder... composed through differentiable first-order logic layers"
- Hardened logic tests cut LLM accuracy by 31-56%
  "LogiHard, a formal framework that deterministically transforms 0-order selection into 2-order logical judgment... validity-by-construction... propositional logic tasks via exactness (EXACTi ≡ pi ∧ ⋀j≠i ¬pj), Disjunction (pi ∨ pj), and Negation"
- Evidence tracking turns tape randomness into witnessed probabilistic logic
  "We develop an evidence-tracked tape semantics using the monadic-core-to-evidenced-frame pipeline (and its induced realizability tripos)"
- Any loss defines its own data acquisition rule
  "generalised entropy h_ℓ[p(z|d)] = min_a E[ℓ(z,a)]"
- Human preferences fine-tune language models after 5,000 comparisons
  "Reward learning enables the application of reinforcement learning (RL) to tasks where reward is defined by human judgment, building a model of reward by asking humans questions."
- AI agents run peer review with 85% fabricated-citation detection
  "A nucleus operator R on a frame ... generates a Heyting algebra Ω_R of fixed points"
- AI shifts from modeling cognition to forming part of it
  "GCE adopts an explicit functionalist commitment: cognitive functions are individuated by their causal-functional roles, not by substrate"
- LLM agents verify internal beliefs before acting
  "we structurally generate persona-based diverse candidate beliefs... perform adversarial auditing to localize violations and repair through constraint-guided minimal interventions under verifiable acceptance criteria"