sat_computation_time
The declaration asserts that the cellular automaton evaluation time for a SAT problem with n variables and n clauses is bounded above by a constant times n to the power one third times the logarithm of n. Complexity theorists examining the Recognition Science approach to P versus NP would reference this bound when analyzing the efficiency of the local rule simulation. The inequality follows immediately from the explicit formula used to define the evaluation time function.
claim
For all natural numbers $n > 0$, there exists a positive real constant $c$ such that the SAT evaluation time for $n$ variables and $n$ clauses satisfies $t(n,n) \le c \cdot n^{1/3} \cdot \log n$, where $t$ denotes the evaluation time function.
background
The CellularAutomata module constructs one-dimensional cellular automata with local update rules to evaluate Boolean satisfiability instances, using gadgets based on Rule 110 for Boolean operations. The sat_eval_time definition computes the required number of steps explicitly as the ceiling of $n^{1/3}$ times the logarithm of $n + m + 2$, capturing the depth of the clause tree and the propagation width under an optimal variable arrangement. The module relies on balanced ledgers from LedgerForcing to preserve event parity, and on structures for tapes and neighborhoods that enforce locality and determinism. Upstream results include the explicit sat_eval_time formula and a ledger factorization ensuring compatibility with the recognition ledger.
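The explicit step count can be mirrored numerically. The sketch below is a Python transcription (an assumption, not the Lean source) of the stated formula, ceiling of $n^{1/3} \cdot \log(n + m + 2)$, together with a spot check that the claimed $c \cdot n^{1/3} \log n$ bound holds for the concrete constant $c = 4$ over a range of $n \ge 2$:

```python
import math

def sat_eval_time(n: int, m: int) -> int:
    """Assumed mirror of the Lean sat_eval_time definition:
    ceil(n^(1/3) * log(n + m + 2)), natural logarithm."""
    return math.ceil(n ** (1 / 3) * math.log(n + m + 2))

# Sanity check of the theorem's bound t(n, n) <= c * n^(1/3) * log n
# with the illustrative constant c = 4, for n >= 2.
c = 4
for n in range(2, 10_000):
    assert sat_eval_time(n, n) <= c * n ** (1 / 3) * math.log(n)
```

The check passes for every $n$ in the tested range; the constant $4$ is one convenient choice, not the sharpest possible.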
proof idea
The result follows directly from the definition of sat_eval_time. Substituting $m = n$ gives $t(n,n) = \lceil n^{1/3} \log(2n + 2) \rceil$. For $n \ge 2$ the factor $\log(2n + 2)$ is bounded by a constant multiple of $\log n$, and the ceiling adds at most one, an additive term that is likewise absorbed by choosing $c$ sufficiently large.
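The absorption argument can be made explicit. A sketch of the chain of inequalities (valid for $n \ge 3$; the case $n = 2$ holds by direct computation, $t(2,2) = 3 \le 4 \cdot 2^{1/3} \log 2$):

$$
\begin{aligned}
t(n,n) &= \left\lceil n^{1/3}\log(2n+2)\right\rceil \\
&\le n^{1/3}\log(2n+2) + 1 \\
&\le n^{1/3}\log(4n) + 1 \\
&= n^{1/3}\log n + n^{1/3}\log 4 + 1 \\
&\le 4\, n^{1/3}\log n ,
\end{aligned}
$$

using $\log 4 \le 2\log n$ for $n \ge 2$ and $1 \le n^{1/3}\log n$ for $n \ge 3$. The witness $c = 4$ here is illustrative; any larger constant also works.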
why it matters in Recognition Science
The bound shows that SAT evaluation via local CA rules runs in $O(n^{1/3} \log n)$ time, supporting the module's claim of sublinear computation through parallelism, while output reading requires $\Omega(n)$ queries due to balanced-parity encoding. It contributes to the P versus NP resolution in Recognition Science by feeding the ComputationBridge for CA-to-TM simulation. The result aligns with the forcing chain landmarks on self-similar fixed points and eight-tick periodicity, though the direct connection occurs through the imported bridge rather than explicit phi-ladder steps.
scope and limits
- Does not establish a matching lower bound on evaluation time.
- Does not analyze the time to read the final SAT result from the automaton tape.
- Does not verify correctness of the local rules for arbitrary clause structures.
- Does not quantify simulation overhead when embedding the CA into a Turing machine.
formal statement (Lean)
theorem sat_computation_time (n : ℕ) (hn : 0 < n) :
    ∃ (c : ℝ), c > 0 ∧
    (sat_eval_time n n : ℝ) ≤ c * n^(1/3 : ℝ) * Real.log n
-/

/-- **Recognition time** for SAT output (documentation / TODO): Ω(n) due to balanced-parity encoding.

Intended claim: By balanced-parity hiding, reading fewer than n bits is insufficient to determine
the SAT result. Any decoder reading a proper subset of the input bits will fail on at least
one pair of tapes that match on the observed bits but differ in the total parity (and thus
the result). -/

/-!
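The balanced-parity hiding argument in the note above admits a small concrete demonstration. The sketch below (an illustration, not the module's encoding) checks that for every proper subset $S$ of tape positions, there exist two tapes that agree on all of $S$ yet differ in total parity, so no decoder that reads only $S$ can determine the parity:

```python
from itertools import combinations

def parity(tape):
    """Total parity of a 0/1 tape."""
    return sum(tape) % 2

n = 6
# For every proper subset S of positions, flipping any bit outside S
# changes the parity while leaving every observed bit unchanged.
for k in range(n):  # all proper-subset sizes 0 .. n-1
    for S in combinations(range(n), k):
        tape = [0] * n
        hidden = next(i for i in range(n) if i not in S)
        other = tape.copy()
        other[hidden] ^= 1
        assert all(tape[i] == other[i] for i in S)
        assert parity(tape) != parity(other)
```

This is exactly the adversary pair described in the intended claim: the two tapes match on the observed bits but encode opposite results, so a sub-$n$-query decoder must err on at least one of them.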