pith · machine review for the scientific record

arxiv: 2603.18203 · v4 · submitted 2026-03-18 · 💻 cs.CL · cs.CY

Recognition: no theorem link

How Psychological Learning Paradigms Shaped and Constrained Artificial Intelligence

Authors on Pith no claims yet

Pith reviewed 2026-05-15 09:11 UTC · model grok-4.3

classification 💻 cs.CL cs.CY
keywords systematic compositional reasoning · psychological learning paradigms · AI architecture · behaviourism cognitivism constructivism · ReSynth framework · auxiliary hypotheses · chain-of-thought prompting

The pith

AI's failure at systematic compositional reasoning is architectural, inherited from behaviourism, cognitivism, and constructivism rather than fixable by scale or prompting.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that current AI systems cannot reliably recombine known components into novel configurations because their core architectures were shaped by psychological learning theories that omitted necessary structural features. Behaviourism excluded internal structure, cognitivism left representations opaque, and constructivism provided no formal construction operators, so techniques such as chain-of-thought prompting and human-feedback alignment function only as auxiliary patches. The authors trace this genealogy explicitly and conclude that systematicity will remain an after-the-fact correction until architectures are redesigned. They introduce ReSynth as a trimodular framework that separates reasoning, identity, and memory so that systematic behaviour becomes a direct consequence of the design. A reader should care because the argument implies that simply enlarging models or adding data will not overcome the deficit while the underlying architectural indifference persists.

Core claim

The central claim is that the inability of AI to exhibit systematic compositional reasoning is not a matter of insufficient scale or data but an architectural consequence of the psychological learning paradigms that informed AI methodology. Behaviourism bequeathed the exclusion of internal structure, cognitivism the opacity of representations, and constructivism the absence of formal construction operators. As a result, proliferating corrective methods such as chain-of-thought prompting and alignment through human feedback operate as auxiliary hypotheses that address symptoms without altering the underlying indifference to systematicity. The paper proposes ReSynth, a trimodular conceptual framework that separates reasoning, identity, and memory, as the architectural remedy.

What carries the argument

ReSynth, the trimodular conceptual framework that separates reasoning, identity, and memory so systematic behaviour becomes a structural consequence of the architecture rather than a post-hoc correction.
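The paper gives no formal specification of these modules, so the following is only an illustrative sketch of what a reasoning/identity/memory separation could look like; every class name and interface here is hypothetical, not drawn from the paper. What it illustrates is the structural point: when components are stored explicitly and combined by an explicit construction operator, recombination of never-before-paired components holds by construction rather than as a post-hoc correction.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a ReSynth-style trimodular split.
# All names and interfaces are illustrative assumptions.

@dataclass
class Memory:
    """Stores named components explicitly rather than entangling them in weights."""
    components: dict = field(default_factory=dict)

    def store(self, name, fn):
        self.components[name] = fn

    def retrieve(self, name):
        return self.components[name]

@dataclass
class Identity:
    """Keeps role-to-component bindings stable across contexts."""
    bindings: dict = field(default_factory=dict)

    def bind(self, role, name):
        self.bindings[role] = name

class Reasoner:
    """Applies a formal construction operator: composition of retrieved components."""
    def __init__(self, memory, identity):
        self.memory, self.identity = memory, identity

    def compose(self, *roles):
        # Systematicity by construction: any stored components can be
        # recombined, including pairs never composed before.
        fns = [self.memory.retrieve(self.identity.bindings[r]) for r in roles]
        def composed(x):
            for f in reversed(fns):
                x = f(x)
            return x
        return composed

memory = Memory()
memory.store("double", lambda x: 2 * x)
memory.store("increment", lambda x: x + 1)

identity = Identity()
identity.bind("outer", "double")
identity.bind("inner", "increment")

reasoner = Reasoner(memory, identity)
f = reasoner.compose("outer", "inner")   # double(increment(x))
print(f(3))  # 8
```

Swapping the role bindings yields the other composition with no retraining, which is exactly the behaviour the framework says should follow from the architecture itself.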

If this is right

  • Corrective techniques such as chain-of-thought prompting and reinforcement learning from human feedback will remain insufficient because they do not alter the architectural source of the deficit.
  • AI development must incorporate internal structure, transparent representations, and formal construction operators if systematicity is to arise by design.
  • A cross-cultural reappraisal of rote learning offers an underexploited route to supplying the missing formal operators.
  • Scaling models or increasing training data alone will not produce architectures in which systematic behaviour is a necessary consequence rather than an auxiliary result.
  • Future systems should treat the separation of reasoning, identity, and memory as a foundational design principle rather than an optional module.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the architectural diagnosis holds, current scaling trajectories in language models may reach a plateau on tasks requiring genuine novelty rather than pattern completion.
  • The argument suggests that alternative psychological or philosophical learning theories beyond the three dominant ones could supply additional structural ingredients currently missing from AI.
  • Implementing the ReSynth separation in an actual system would allow direct tests of whether the trimodular split is sufficient to produce systematic recombinations on held-out configurations.
  • The same logic could be applied to other cognitive capacities, such as causal reasoning or analogical mapping, that also depend on explicit structural recombination.

Load-bearing premise

The assumption that AI's deficit in systematic compositional reasoning is architectural in origin and traces directly to the structural limitations bequeathed by behaviourism, cognitivism, and constructivism.

What would settle it

The architectural-indifference claim would be falsified by an existing connectionist or transformer-based model that, trained at current scales without a ReSynth-style separation of reasoning, identity, and memory, reliably produces correct outputs on novel recombinations of components never seen together during training.
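Operationally, "novel recombinations never seen together during training" is typically tested with a compositional split in the style of Lake and Baroni's SCAN benchmark (cited in the reference list). The sketch below is a hypothetical version of such a protocol, not the paper's own experiment: every primitive appears somewhere in training, one action-modifier pairing is withheld entirely, and a candidate model is scored only on that held-out combination.

```python
from itertools import product

# Hypothetical SCAN-style compositional split: every primitive is seen in
# training, but one specific combination is never seen composed.

actions = ["jump", "walk", "run"]
modifiers = ["twice", "thrice"]

all_pairs = list(product(actions, modifiers))
held_out = [("jump", "thrice")]                      # never seen composed
train = [p for p in all_pairs if p not in held_out]

def evaluate(model, pairs):
    """Fraction of held-out recombinations the model executes correctly."""
    correct = 0
    for action, modifier in pairs:
        expected = [action] * {"twice": 2, "thrice": 3}[modifier]
        if model(f"{action} {modifier}") == expected:
            correct += 1
    return correct / len(pairs)

# A model that has genuinely internalized the construction operator
# generalizes to the withheld pairing:
def compositional_model(command):
    action, modifier = command.split()
    return [action] * {"twice": 2, "thrice": 3}[modifier]

print(evaluate(compositional_model, held_out))  # 1.0
```

A model that only pattern-matches over seen pairs would score 0.0 on the held-out split; reliable success here by an unmodified transformer at scale is the result that would falsify the claim.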

read the original abstract

Current artificial intelligence systems struggle with systematic compositional reasoning: the capacity to recombine known components in novel configurations. This paper argues that the failure is architectural, not merely a matter of scale or training data, and that its origins lie in the psychological learning theories from which AI paradigms were derived. The argument proceeds in three stages. First, drawing on the systematicity debate in cognitive science and on the demonstration of Aizawa that neither connectionism nor classicism can make systematicity a structural consequence of the architecture, the paper establishes that the corrective techniques proliferating in modern AI, from chain-of-thought prompting to alignment through human feedback, function as auxiliary hypotheses that address symptoms without resolving the underlying architectural indifference to systematicity. Second, it traces the genealogy from psychological learning theory to AI methodology, showing that behaviourism, cognitivism, and constructivism each bequeathed a specific structural limitation to the AI paradigm it inspired: the exclusion of internal structure, the opacity of representation, and the absence of formal construction operators. A cross-cultural reappraisal of rote learning reveals a further underexploited pathway. Third, the paper introduces ReSynth, a trimodular conceptual framework that proposes the principled separation of reasoning, identity, and memory as a path toward architectures in which systematic behaviour is a structural consequence of design rather than a correction applied after the fact.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that current AI systems' failure at systematic compositional reasoning is architectural rather than a matter of scale or data, with roots in the structural limitations bequeathed by behaviourism (exclusion of internal structure), cognitivism (representational opacity), and constructivism (absence of formal operators). It argues that techniques such as chain-of-thought prompting and RLHF function only as auxiliary hypotheses addressing symptoms, draws on the systematicity debate and Aizawa's demonstration to support this, and proposes the ReSynth trimodular framework (separating reasoning, identity, and memory) to make systematicity a structural consequence of design.

Significance. If the genealogical links and the architectural diagnosis hold, the work would reframe AI limitations as historically contingent rather than inevitable, offering a path to architectures in which systematic behaviour is designed in rather than corrected post hoc. The explicit engagement with the systematicity debate and the concrete proposal of ReSynth provide a falsifiable direction for future architectural work that could be tested against existing connectionist and symbolic baselines.

major comments (3)
  1. [Genealogy tracing (second stage)] The second stage traces behaviourism, cognitivism, and constructivism to specific structural limitations in AI but does not derive these limitations from concrete mechanisms such as the attention update rule, the back-propagation gradient, or the geometry of the token embedding space; without this derivation the claim that the observed failures are architectural inheritances rather than statistical or hardware constraints remains unproven.
  2. [First stage (systematicity debate and corrective techniques)] The assertion that chain-of-thought, RLHF, and related techniques are merely auxiliary hypotheses rests on the prior claim of architectural indifference; yet the manuscript provides no analysis showing why these methods cannot, in principle, induce the required structural properties through training dynamics or prompting, leaving the auxiliary-hypothesis diagnosis unsupported by examination of the model's inductive bias.
  3. [Third stage (ReSynth proposal)] ReSynth is introduced as a trimodular conceptual framework whose separation of reasoning, identity, and memory is said to render systematicity structural, but the manuscript supplies neither a formal specification of the modules nor a proof (or even a sketch) that this separation satisfies Aizawa's criterion that systematicity be a structural consequence rather than an emergent or corrected property.
minor comments (2)
  1. [Abstract] The abstract refers to 'Aizawa' without a full citation; the reference list should include the specific work invoked in the systematicity argument.
  2. [Second stage] The cross-cultural reappraisal of rote learning is mentioned only briefly; if it is intended to supply an additional pathway, it should be expanded with at least one concrete example of how it differs from the three main paradigms.

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for the constructive and detailed comments on our manuscript. We address each major comment point by point below, noting the revisions we intend to make in response.

read point-by-point responses
  1. Referee: [Genealogy tracing (second stage)] The second stage traces behaviourism, cognitivism, and constructivism to specific structural limitations in AI but does not derive these limitations from concrete mechanisms such as the attention update rule, the back-propagation gradient, or the geometry of the token embedding space; without this derivation the claim that the observed failures are architectural inheritances rather than statistical or hardware constraints remains unproven.

    Authors: We concur that strengthening the link between the psychological paradigms and concrete AI mechanisms would make the architectural diagnosis more robust. The manuscript currently focuses on the historical and conceptual lineage, but we will revise the second stage to include explicit mappings. For example, we will derive how behaviourism's rejection of internal states leads to the non-modular attention update rules in modern transformers, and how cognitivism's opaque representations persist in the geometry of token embeddings despite gradient-based training. This addition will address the concern that the failures might be statistical rather than architectural. revision: yes

  2. Referee: [First stage (systematicity debate and corrective techniques)] The assertion that chain-of-thought, RLHF, and related techniques are merely auxiliary hypotheses rests on the prior claim of architectural indifference; yet the manuscript provides no analysis showing why these methods cannot, in principle, induce the required structural properties through training dynamics or prompting, leaving the auxiliary-hypothesis diagnosis unsupported by examination of the model's inductive bias.

    Authors: The auxiliary hypothesis characterization is based on the systematicity debate and Aizawa's demonstration that current architectures lack the structural basis for systematicity. To support this further, we will add to the first stage an examination of the inductive biases in transformer models. Specifically, we will analyze why chain-of-thought prompting and RLHF cannot fundamentally alter the architecture to make systematicity structural, citing evidence that these methods improve performance on specific tasks but do not resolve generalization failures in novel compositions. This will include discussion of how training dynamics reinforce existing representational limitations. revision: yes

  3. Referee: [Third stage (ReSynth proposal)] ReSynth is introduced as a trimodular conceptual framework whose separation of reasoning, identity, and memory is said to render systematicity structural, but the manuscript supplies neither a formal specification of the modules nor a proof (or even a sketch) that this separation satisfies Aizawa's criterion that systematicity be a structural consequence rather than an emergent or corrected property.

    Authors: As a conceptual framework, ReSynth outlines a direction for future work rather than providing an implemented system. We will revise the third stage to include a sketch of the module specifications, describing the interfaces between reasoning, identity, and memory modules and how they enforce structural systematicity. We will also provide a brief argument demonstrating alignment with Aizawa's criterion by showing that systematic recombination becomes a direct outcome of the modular separation. However, a full formal proof would require detailed implementation and testing, which we note as future work. revision: partial

standing simulated objections not resolved
  • A full formal proof and implementation of ReSynth to rigorously satisfy Aizawa's criterion.

Circularity Check

0 steps flagged

No significant circularity detected in derivation chain

full rationale

The paper's argument proceeds via external citation to Aizawa's demonstration on systematicity (neither connectionism nor classicism makes it structural) and a historical/genealogical tracing of behaviourism, cognitivism and constructivism to specific architectural limitations. No equations, fitted parameters, or self-citations appear in the provided text that reduce any central claim to its own inputs by construction. The establishment that corrective techniques function as auxiliary hypotheses follows directly from the cited external demonstration rather than from a self-referential loop. Introduction of the ReSynth framework is presented as a forward proposal, not a derived prediction forced by prior steps. The derivation remains interpretive and self-contained against external benchmarks, with no load-bearing self-citation chains or renamings of known results.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 1 invented entities

The paper rests on the established systematicity debate in cognitive science and historical interpretations of learning theories; it introduces ReSynth as a new conceptual entity without new empirical anchors or derivations.

axioms (1)
  • domain assumption Neither connectionism nor classicism can make systematicity a structural consequence of the architecture (Aizawa demonstration)
    Invoked in the first stage to establish that corrective techniques are auxiliary hypotheses.
invented entities (1)
  • ReSynth no independent evidence
    purpose: Trimodular conceptual framework that separates reasoning, identity, and memory to make systematic behaviour structural
    Newly proposed architecture intended to overcome the limitations traced to psychological paradigms

pith-pipeline@v0.9.0 · 5556 in / 1387 out tokens · 51507 ms · 2026-05-15T09:11:05.276803+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

75 extracted references · 75 canonical work pages · 4 internal anchors

  1. [1] Michalski, R.S., Carbonell, J.G., Mitchell, T.M.: Machine Learning: An Artificial Intelligence Approach. Springer (2009)

  2. [2] Prestianni, T.: Behaviorism learning theory (2023)

  3. [3] Graham, S.: Behaviorism. In: Encyclopedia of the Sciences of Learning. Springer (2012)

  4. [4] Pavlov, I.: Lectures on Conditioned Reflexes. International Publishers (1930)

  5. [5] Chomsky, N.: A review of B.F. Skinner's Verbal Behavior. Language 35(1), 26–58 (1959)

  6. [7] Leuenberger, M.: Against Personalized AI Moral Advisors: Commentary on 'Can AI Rely on the Systematicity of Truth?' by Matthieu Queloz. Philosophy & Technology 38 (2025), https://api.semanticscholar.org/CorpusID:277685745

  7. [8] Queloz, M.: On the Fundamental Limitations of AI Moral Advisors. Philosophy & Technology 38, 71 (2025), https://doi.org/10.1007/s13347-025-00896-3

  8. [9] Bandura, A.: Toward a psychology of human agency. Perspectives on Psychological Science 1(2), 164–180 (2006)

  9. [10] Shakya, S., et al.: Reinforcement learning algorithms: A brief survey. Expert Systems with Applications 231 (2023)

  10. [11] Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press (1998)

  11. [12] Deubel, S.W.: An investigation of behaviorist and cognitive approaches to instructional multimedia design. J. Educational Multimedia and Hypermedia 12(1) (2003)

  12. [13] Ertmer, P.A., Newby, T.J.: Behaviorism, cognitivism, constructivism: Comparing critical features. Performance Improvement Quarterly 6(4), 50–72 (1993)

  13. [14] Ormrod, J.E.: Human Learning, 6th edn. Pearson (2012)

  14. [15] Almeida, E., Xexeo, G.: Word embeddings: A survey. arXiv:1901.09069 (2019)

  15. [16] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)

  16. [17] Niu, Z., et al.: A review on the attention mechanism of deep learning. Neurocomputing 452, 48–62 (2021)

  17. [18] Hospedales, T., et al.: Meta-learning in neural networks: A survey. IEEE Trans. Pattern Analysis and Machine Intelligence 44(9), 5149–5169 (2021)

  18. [19] Lai, E.R.: Metacognition: A literature review. Pearson Research Report (2011)

  19. [20] Alahmad, B.A.: A review of cognitivism and its relationship with e-learning (2020)

  20. [21] LeCun, Y.: A Path Towards Autonomous Machine Intelligence. OpenReview preprint, version 0.9.2 (2022)

  21. [22] Varela, F., Thompson, E., Rosch, E.: The Embodied Mind: Cognitive Science and Human Experience. MIT Press (1991)

  22. [23] McCloskey, M., Cohen, N.J.: Catastrophic interference in connectionist networks. Psychology of Learning and Motivation 24, 109–165 (1989)

  23. [24] Hein, G.E.: Constructivist learning theory. Institute for Inquiry (2023)

  24. [25] Efgivia, T., et al.: Constructivism approach in learning. Advances in Social Science, Education and Humanities Research (2021)

  25. [26] Bengio, Y., et al.: Curriculum learning. In: Proceedings of ICML, pp. 41–48 (2009)

  26. [27] Vilalta, R., Drissi, Y.: A perspective view and survey of meta-learning. Artificial Intelligence Review 18, 77–95 (2002)

  27. [28] Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv:1503.02531 (2015)

  28. [29] Li, X.: An analysis of Chinese EFL learners' beliefs about the role of rote learning in vocabulary learning strategies. Asian EFL Journal (2004)

  29. [30] Ahmed, S., et al.: Comparative analysis of learning approaches. Journal of AI Research (2017)

  30. [31] Biggs, J.: Western misperceptions of the Confucian-heritage learning culture. In: The Chinese Learner (1999)

  31. [32] Kirkpatrick, J., et al.: Overcoming catastrophic forgetting in neural networks. Proc. National Academy of Sciences 114(13), 3521–3526 (2017)

  32. [33] Ellefsen, K.O., Mouret, J.B., Clune, J.: Neural modularity helps organisms evolve to learn new skills without forgetting old skills. PLoS Computational Biology 11(4), e1004128 (2015)

  33. [34] Aly, R., Dugan, T.: Reducing catastrophic forgetting in modular neural networks by dynamic information balancing. arXiv:1912.04508 (2019)

  34. [35] Hosseini, A., Sordoni, A., Toyama, D., Courville, A., Agarwal, R.: Not All LLM Reasoners Are Created Equal. arXiv:2410.01748 (2024)

  35. [36] Li, Z., et al.: Understanding and patching compositional reasoning in LLMs. ACL Findings, pp. 9668–9688 (2024)

  36. [37] Mohsin, M., Umer, M., Bilal, A., Memon, Z., Qadir, M., Bhattacharya, S., Rizwan, H., Gorle, A., Kazmi, M., Amir, N., Subhan, A., Rafique, M., He, Z., Mehta, P., Jamshed, M., Cioffi, J.: On the Fundamental Limits of LLMs at Scale. arXiv:2511.12869 (2026)

  37. [38] Fodor, J.A., Pylyshyn, Z.W.: Connectionism and cognitive architecture: A critical analysis. Cognition 28, 3–71 (1988)

  38. [39] Aizawa, K.: Explaining systematicity. Mind and Language 12(2), 115–136 (1997)

  39. [40] Wei, J., et al.: Chain-of-thought prompting elicits reasoning in large language models. NeurIPS (2022)

  40. [41] Lewis, P., et al.: Retrieval-augmented generation for knowledge-intensive NLP tasks. NeurIPS (2020)

  41. [42] Ouyang, L., et al.: Training language models to follow instructions with human feedback. NeurIPS (2022)

  42. [43] Chollet, F.: On the measure of intelligence. arXiv:1911.01547 (2019)

  43. [44] Pfeiffer, J., Ruder, S., Vulić, I., Ponti, E.: Modular Deep Learning. arXiv:2302.11529 (2024)

  44. [45] Kandpal, N., Deng, H., Roberts, A., Wallace, E., Raffel, C.: Large Language Models Struggle to Learn Long-Tail Knowledge. arXiv:2211.08411 (2023)

  45. [46] Rusu, A.A., et al.: Progressive neural networks. arXiv:1606.04671 (2016)

  46. [47] Bender, E.M., Koller, A.: Climbing towards NLU: On meaning, form, and understanding in the age of data. ACL (2020)

  47. [48] Marcus, G.: The next decade in AI: Four steps towards robust artificial intelligence. arXiv:2002.06177 (2020)

  48. [49] Dunbar, R.: Grooming, Gossip, and the Evolution of Language. Harvard University Press (1996)

  49. [50] Tomasello, M.: Origins of Human Communication. MIT Press (2008)

  50. [51] Lake, B., Baroni, M.: Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. arXiv:1711.00350 (2018)

  51. [52] Huang, H., LeCun, Y., Balestriero, R.: Semantic Tube Prediction: Beating LLM Data Efficiency with JEPA. arXiv:2602.22617 (2026)

  52. [53] Anderson, J.R.: How Can the Human Mind Occur in the Physical Universe? Oxford University Press (2007)

  53. [54] Laird, J.E.: The Soar Cognitive Architecture. MIT Press (2012)

  54. [55] Ellis, K., Wong, C., Nye, M., et al.: DreamCoder: Bootstrapping inductive program synthesis with wake-sleep library learning. PLDI, pp. 835–850 (2021)

  55. [56] Garcez, A.d., Lamb, L.C.: Neurosymbolic AI: The 3rd wave. Artificial Intelligence Review 56, 12387–12406 (2023)

  56. [57] Lamb, L.C., Garcez, A.d., Gori, M., et al.: Graph neural networks meet neural-symbolic computing: A survey and perspective. IJCAI Survey Track (2020)

  57. [58] Baars, B.J.: A Cognitive Theory of Consciousness. Cambridge University Press (1988)

  58. [59] Dehaene, S., Changeux, J.-P., Naccache, L.: The global neuronal workspace model of conscious access. Biological Bulletin 221(1), 76–93 (2011)

  59. [60] Shanahan, M.: Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford University Press (2010)

  60. [61] Goyal, A., Bengio, Y.: Inductive biases for deep learning of higher-level cognition. Proc. Royal Society A 478(2266), 20210068 (2022)

  61. [62] Clark, A.: Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3), 181–204 (2013)

  62. [63] Clark, A.: Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press (2015)

  63. [64] Boden, M.A.: The Creative Mind: Myths and Mechanisms, 2nd edn. Routledge (2004)

  64. [65] Conmy, A., Mavor-Parker, A.N., Lynch, A., et al.: Towards automated circuit discovery for mechanistic interpretability. NeurIPS (2023)

  65. [66] Elhage, N., et al.: Toy models of superposition. Transformer Circuits Thread, Anthropic (2022)

  66. [67] Belinkov, Y.: Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics 48(1), 207–219 (2022)

  67. [68] Mac Lane, S.: Categories for the Working Mathematician, 2nd edn. Springer (2013)

  68. [69] Fong, B., Spivak, D.I.: An Invitation to Applied Category Theory: Seven Sketches in Compositionality. Cambridge University Press (2019)

  69. [70] Bai, Y., et al.: Constitutional AI: Harmlessness from AI feedback. arXiv:2212.08073 (2022)

  70. [71] Fodor, J., McLaughlin, B.: Connectionism and the problem of systematicity: Why Smolensky's solution doesn't work. Cognition 35, 183–204 (1990)

  71. [72] Rocktäschel, T., Riedel, S.: End-to-end differentiable proving. NeurIPS (2017)

  72. [73] Manhaeve, R., Dumancic, S., Kimmig, A., et al.: DeepProbLog: Neural probabilistic logic programming. NeurIPS (2018)

  73. [74] Li, Z., et al.: Scallop: A language for neurosymbolic programming. PLDI (2023)

  74. [75] Marcus, G.: Neurosymbolic AI and common sense. Presentation at AAAI 2020 Spring Symposium (2020)

  75. [76] Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. ICML (2017)