defect_zero_iff_one (theorem, proved)
Source: IndisputableMonolith.Foundation.LawOfExistence on GitHub, at line 58.
depends on
used by
- determinism_resolution
- continuous_no_isolated_zero_defect
- continuous_space_no_lockIn
- discreteness_forcing_principle
- concrete_inevitability
- summary
- economic_inevitability
- zero_defect_iff_unity
- defect_pos_of_ne_one
- existence_economically_inevitable
- exists_iff_unity
- structured_set_singleton
- unity_unique_existent
- consistent_minimum_cost
- contradiction_positive_cost
- logic_from_cost
- logic_from_cost_summary
- why_logic_is_real
- RecognitionBridge
- rs_exists_unique_one
- ground_state_paradox
- actuality_is_j_minimum
- necessary_is_one
- necessity_is_unique_minimizer
- ideal_iff_good
- is_implies_ought
- ph004_objective_morality_certificate
- ph006_probability_certificate
- prob_is_epistemic
formal source
55  /-! ## Core Equivalence Theorems -/
56
57  /-- **Defect Zero Characterization**: defect(x) = 0 ⟺ x = 1 (for x > 0). -/
58  theorem defect_zero_iff_one {x : ℝ} (hx : 0 < x) : defect x = 0 ↔ x = 1 := by
59    simp only [defect, J]
60    constructor
61    · intro h
62      have hx0 : x ≠ 0 := hx.ne'
63      -- (x + 1/x)/2 - 1 = 0 implies x + 1/x = 2
64      have h1 : x + x⁻¹ = 2 := by linarith
65      -- Multiply by x: x² + 1 = 2x, so (x - 1)² = 0
66      have h2 : x * (x + x⁻¹) = x * 2 := by rw [h1]
67      have h3 : x^2 + 1 = 2 * x := by field_simp at h2; linarith
68      nlinarith [sq_nonneg (x - 1)]
69    · intro h; simp [h]
70
71  /-- **Law of Existence (Forward)**: Existence implies defect is zero. -/
72  theorem exists_implies_defect_zero {x : ℝ} (h : Exists x) : defect x = 0 :=
73    h.defect_zero
74
75  /-- **Law of Existence (Backward)**: Zero defect (with x > 0) implies existence. -/
76  theorem defect_zero_implies_exists {x : ℝ} (hpos : 0 < x) (hdef : defect x = 0) :
77      Exists x := ⟨hpos, hdef⟩
78
79  /-- **Law of Existence (Biconditional)**: x exists ⟺ defect collapses. -/
80  theorem law_of_existence (x : ℝ) : Exists x ↔ DefectCollapse x :=
81    ⟨fun ⟨hpos, hdef⟩ => ⟨hpos, hdef⟩, fun ⟨hpos, hdef⟩ => ⟨hpos, hdef⟩⟩
82
83  /-- **Existence Characterization**: x exists ⟺ x = 1. -/
84  theorem exists_iff_unity {x : ℝ} (hx : 0 < x) : Exists x ↔ x = 1 :=
85    ⟨fun ⟨_, hdef⟩ => (defect_zero_iff_one hx).mp hdef,
86     fun h => ⟨hx, (defect_zero_iff_one hx).mpr h⟩⟩
87
88  /-- **Unity is Unique Existent**: ∀ x, Exists x ⟺ x = 1. -/
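The characterization can be replayed in a small standalone file. The definitions of `J` and `defect` below are reconstructed from the proof's `simp only [defect, J]` unfolding and its comment `(x + 1/x)/2 - 1 = 0`; they are an assumption about the module's actual definitions, not a copy of them, so this is an illustrative sketch rather than the source.

```lean
import Mathlib

-- Assumed definitions, reconstructed from the proof above.
noncomputable def J (x : ℝ) : ℝ := (x + x⁻¹) / 2
noncomputable def defect (x : ℝ) : ℝ := J x - 1

-- The defect vanishes at unity: (1 + 1)/2 - 1 = 0 ...
example : defect 1 = 0 := by norm_num [defect, J]

-- ... and is strictly positive away from it, e.g. defect 2 = 1/4:
example : 0 < defect 2 := by norm_num [defect, J]

-- Reciprocity of the cost: J x = J x⁻¹ (in ℝ, (x⁻¹)⁻¹ = x holds even at 0).
example (x : ℝ) : J x = J x⁻¹ := by
  simp [J, inv_inv, add_comm]
```

Under these assumed definitions, the equivalence `defect x = 0 ↔ x = 1` for `x > 0` is exactly the statement that the averaged cost `(x + 1/x)/2` attains its minimum value 1 only at `x = 1`.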
Papers checked against this theorem (showing 30 of 33):

- MIRIX boosts LLM agent accuracy 35% with 99.9% less storage
  "MIRIX consists of six distinct, carefully structured memory types: Core, Episodic, Semantic, Procedural, Resource Memory, and Knowledge Vault, coupled with a multi-agent framework that dynamically controls and coordinates updates and retrieval."
- Unlabeled data plus RL lifts math model scores 211%
  "TTRL boosts the pass@1 performance of Qwen-2.5-Math-7B by approximately 211% on the AIME 2024 with only unlabeled test data. Furthermore, although TTRL is only supervised by the maj@n metric, TTRL has demonstrated performance to consistently surpass the upper limit of the initial model maj@n."
- Self-play fine-tuning strengthens weak LLMs without new human data
  "Theoretically, we prove that the global optimum to the training objective function of our method is achieved only when the LLM policy aligns with the target data distribution."
- Unique reciprocal cost on ratios forces balanced discrete ledgers
  "This cost exhibits reciprocity J(x)=J(x⁻¹), vanishes only at x=1, and diverges at boundary regimes x→0⁺ and x→∞"
- 1M precise text-video pairs raise T2V quality
  "we introduce OpenVid-1M, a precise high-quality dataset with expressive captions... propose a novel Multi-modal Video Diffusion Transformer (MVDiT) capable of mining both structure information from visual tokens and semantic information from text tokens"
- Pixel ops lift 7B VLM to 84% on visual reasoning tests
  "We introduce the concept of pixel-space reasoning... VLMs are equipped with a suite of visual reasoning operations, such as zoom-in and select-frame... a reinforcement learning (RL) phase leverages a curiosity-driven reward scheme to balance exploration between pixel-space reasoning and textual reasoning."
- LLM agents follow malicious instructions without jailbreaks
  "The benchmark includes a diverse set of 110 explicitly malicious agent tasks (440 with augmentations), covering 11 harm categories including fraud, cybercrime, and harassment. In addition to measuring whether models refuse harmful agentic requests, scoring well on AgentHarm requires jailbroken agents to maintain their capabilities following an attack to complete a multi-step task."
- Survey maps paths to efficient LLM reasoning
  "categorize existing works into... (1) model-based efficient reasoning... (2) reasoning output-based efficient reasoning, which aims to dynamically reduce reasoning steps and length during inference; (3) input prompts-based efficient reasoning"
- Max-entropy RL equals probabilistic inference
  "the graphical model for control as inference... optimality variables... p(Ot=1|st,at)=exp(r(st,at))"
- Sparse circuits map language model behaviors to interpretable features
  "We introduce SHIFT, where we improve the generalization of a classifier by ablating features that a human judges to be task-irrelevant."
- Community LLM skills show 26 percent vulnerability rate
  "recent empirical analyses reveal that 26.1% of community-contributed skills contain vulnerabilities, motivating our proposed Skill Trust and Lifecycle Governance Framework—a four-tier, gate-based permission model"
- Discrete audio code model enables zero-shot TTS from 3s prompt
  "VALL-E emerges in-context learning capabilities and can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt"
- LLMs fake alignment during training to keep their preferred behavior
  "We find the model complies with harmful queries from free users 14% of the time, versus almost never for paid users. In almost all cases where the model complies with a harmful query from a free user, we observe explicit alignment-faking reasoning, with the model stating it is strategically answering harmful queries in training to preserve its preferred harmlessness behavior out of training."
- Vanilla PPO scales reasoning on base models with one-tenth the steps
  "the learned critic in Reasoner-Zero training effectively identifies and devalues repetitive response patterns"
- COCO supplies 1.5 million captions for 330k images
  "When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided."
- Many LVLM tests succeed without images due to text clues and leaks
  "Visual content is unnecessary for many samples. The answers can be directly inferred from the questions and options, or the world knowledge embedded in LLMs."
- Evolving contexts raise LLM agent performance while lowering costs
  "Notably, ACE could adapt effectively without labeled supervision and instead by leveraging natural execution feedback."
- One million real search questions power a new reading benchmark
  "A question in the MS MARCO dataset may have multiple answers or no answers at all."
- Self-distillation turns feedback into denser RL signals
  "SDPO treats the current model conditioned on feedback as a self-teacher and distills its feedback-informed next-token predictions back into the policy... leverages the model's ability to retrospectively identify its own mistakes in-context."
- LLMs judge AI outputs through natural language
  "We begin by providing a systematic definition of LLMs-as-Judges and introduce their functionality (Why use LLM judges?)."
- GRPO adaptation stabilizes RL for visual generators
  "formulate the denoising process... as a Markov Decision Process... GRPO-style objective... advantage function Ai = (ri − mean({r1,...}))/std({r1,...})"
- Simple pixel Transformers generate competitive ImageNet samples by predicting clean data
  "According to the manifold assumption, natural data should lie on a low-dimensional manifold, whereas noised quantities do not. With this assumption, we advocate for models that directly predict clean data, which allows apparently under-capacity networks to operate effectively in very high-dimensional spaces."
- Implicit rewards train process models online for LLM reasoning
  "dense rewards also offer an appealing choice for the reinforcement learning (RL) of LLMs since their fine-grained rewards have the potential to address some inherent issues of outcome rewards, such as training efficiency and credit assignment"
- ML models pass 20% of test cases on introductory coding problems
  "our benchmark measures the ability of models to take an arbitrary natural language specification and generate satisfactory Python code... we evaluate models by checking their generated code on test cases"
- Chat assistants lose 30% on long-term memory benchmark
  "commercial chat assistants and long-context LLMs showing a 30% accuracy drop on memorizing information across sustained interactions."
- Tree of Thoughts raises LLM planning success from 4% to 74%
  "while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%."
- Bottleneck code extracts relevant info from signals
  "the information that this signal provides about another signal y∈Y"
- Fine-tuned CLIP matches human tastes for AI images better than before
  "Our experiments demonstrate that HPS v2 generalizes better than previous metrics across various image distributions and is responsive to algorithmic improvements of text-to-image generative models."
- Fréchet Video Distance aligns with human ratings of generated clips
  "FVD builds on the principles underlying Fréchet Inception Distance (FID)... We introduce a different feature representation that captures the temporal coherence of a video"
- Asymptotic formulae give distributions for new physics tests
  "likelihood-based statistical tests... profile likelihood ratio"