pith. machine review for the scientific record.

arxiv: 1611.01855 · v1 · submitted 2016-11-06 · 💻 cs.AI · cs.PL

Recognition: unknown

Neuro-Symbolic Program Synthesis

Authors on Pith: no claims yet
classification: 💻 cs.AI, cs.PL
keywords: examples, program, input-output, neural, programs, able, construct, given
Original abstract

Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.
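The abstract's two-stage pipeline — take a set of input-output examples, then synthesize a DSL program by incrementally expanding partial programs until one is consistent with every example — can be sketched symbolically. The toy string-transformation DSL, the operation names, and the breadth-first search below are illustrative assumptions, not the paper's method: exhaustive expansion stands in for the learned R3NN expansion policy, and no neural encoding of the examples is performed.

```python
# Illustrative sketch (not the paper's implementation): grow a program
# over a toy string-transformation DSL by expanding partial programs
# one operation at a time, returning the first program consistent with
# all input-output examples.

# Toy DSL: a program is a tuple of named operations applied in order.
OPS = {
    "upper":   str.upper,
    "lower":   str.lower,
    "reverse": lambda s: s[::-1],
    "first3":  lambda s: s[:3],
}

def run(program, s):
    """Apply each operation of the program to the string, left to right."""
    for op in program:
        s = OPS[op](s)
    return s

def consistent(program, examples):
    """True if the program maps every example input to its expected output."""
    return all(run(program, inp) == out for inp, out in examples)

def synthesize(examples, max_len=3):
    """Breadth-first expansion of partial programs, shortest first.

    Each partial program of length k is expanded into len(OPS) children
    of length k+1 -- the symbolic analogue of incremental expansion,
    with exhaustive search in place of a learned expansion policy.
    """
    if consistent((), examples):
        return ()
    frontier = [()]  # start from the empty partial program
    for _ in range(max_len):
        next_frontier = []
        for partial in frontier:
            for op in OPS:
                child = partial + (op,)   # one incremental expansion
                if consistent(child, examples):
                    return child
                next_frontier.append(child)
        frontier = next_frontier
    return None  # no consistent program within the length budget

examples = [("Hello", "OLL"), ("world", "DLR")]
program = synthesize(examples)
print(program, run(program, "arxiv"))
```

Because the search is over a symbolic DSL, the returned program is a readable operation sequence that can be checked against the examples and applied to unseen inputs — the interpretability and reuse properties the abstract contrasts with per-task neural program induction.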

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. SEVerA: Verified Synthesis of Self-Evolving Agents

    cs.LG 2026-03 unverdicted novelty 8.0

    SEVerA uses Formally Guarded Generative Models and a three-stage Search-Verification-Learning process to synthesize self-evolving agents that satisfy hard formal constraints while improving task performance.

  2. SynthFix: Adaptive Neuro-Symbolic Code Vulnerability Repair

    cs.SE 2026-04 unverdicted novelty 7.0

    SynthFix adaptively routes LLM code repairs to supervised fine-tuning or symbolic-reward fine-tuning, yielding up to 32% higher exact match on JavaScript and C vulnerability benchmarks.

  3. Overmind NSA: A Unified Neuro-Symbolic Computing Architecture with Approximate Nonlinear Activations and Preemptive Memory Bypass

    cs.AR 2026-04 unverdicted novelty 4.0

    Overmind is a neuro-symbolic architecture that uses adjustable Padé approximations and memory bypass to deliver 8.1 TOPS/W efficiency and 410 GOPS throughput on mixed workloads with minimal accuracy loss.