Pith · machine review for the scientific record

arxiv: 2307.14984 · v3 · submitted 2023-07-27 · 💻 cs.SI


S³: Social-network Simulation System with Large Language Model-Empowered Agents

Authors on Pith: no claims yet

Pith reviewed 2026-05-17 11:24 UTC · model grok-4.3

classification 💻 cs.SI
keywords social network simulation · large language model agents · agent-based modeling · prompt engineering · information propagation · emotion dynamics · attitude spread · emergent phenomena

The pith

LLM agents in the S3 system emulate human perception and actions to produce emergent social network phenomena like information and emotion propagation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that large language models can power agents in a social network simulation by using prompt engineering to make their sensing, reasoning, and behavior closely match real humans. This matters for social science because such simulations can predict network states, explain how attitudes or emotions spread, and test policy effects without running real-world experiments. The system focuses on three behaviors—emotion, attitude, and interactions—while giving agents the ability to read the surrounding information environment. When run on real social network data, the simulations generate population-level patterns that match observed propagation of information, attitudes, and emotions at promising levels of accuracy.

Core claim

The S3 system constructs an agent-based social network simulator in which each agent, powered by a large language model, perceives the informational environment and emulates genuine human actions through carefully engineered and tuned prompts. By modeling emotion, attitude, and interaction behaviors together, the agents produce emergent population-level dynamics, including the spread of information, attitudes, and emotions across the network. Evaluation against real-world social network data at two simulation levels confirms that these dynamics align with observed patterns at encouraging accuracy.

What carries the argument

Prompt engineering and prompt tuning applied to LLM agents, which lets each agent perceive the informational environment and emulate human emotion, attitude, and interaction behaviors.
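In that spirit, the agent loop might be sketched as follows. This is a minimal illustration, not the paper's implementation: `fake_llm` is a deterministic stand-in for a real chat-completion call, and the prompt template is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    emotion: str       # e.g. "calm", "anxious"
    attitude: str      # e.g. "neutral", "opposed"
    memory: list = field(default_factory=list)

# Illustrative prompt template, not the paper's actual prompt.
PROMPT = (
    "You are {name}. You feel {emotion} and are {attitude} on the topic.\n"
    "You just read: {feed}\n"
    "Answer exactly as: EMOTION=<word>; ATTITUDE=<word>; ACTION=<post|repost|ignore>"
)

def step(agent, feed, call_llm):
    """One tick: perceive the informational environment, query the
    LLM, and parse the structured reply back into agent state."""
    prompt = PROMPT.format(name=agent.name, emotion=agent.emotion,
                           attitude=agent.attitude, feed="; ".join(feed))
    fields = dict(part.split("=", 1) for part in call_llm(prompt).split("; "))
    agent.emotion = fields.get("EMOTION", agent.emotion)
    agent.attitude = fields.get("ATTITUDE", agent.attitude)
    agent.memory.extend(feed)
    return fields.get("ACTION", "ignore")

def fake_llm(prompt):
    # Deterministic stand-in for a real chat-completion call.
    return "EMOTION=anxious; ATTITUDE=opposed; ACTION=repost"

alice = Agent("alice", "calm", "neutral")
action = step(alice, ["Breaking: outbreak reported"], fake_llm)
# action == "repost"; alice is now anxious and opposed
```

The point of the sketch is that the LLM is the only behavioral model: perception is a feed serialized into the prompt, and emotion, attitude, and interaction all come back as parsed text.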

If this is right

  • Social scientists gain a tool for state prediction and phenomena explanation in networks without large-scale surveys.
  • Policy makers can test interventions by observing how simulated attitude or emotion spreads respond to changes in the informational environment.
  • The same agent framework extends to simulation systems outside social science, such as economic or political networks.
  • Two-level evaluation on real data provides a template for validating future LLM-based simulators against ground-truth traces.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Traditional rule-based agent models may be replaceable in many cases once LLM perception and action emulation reach this fidelity.
  • The approach could be extended to test how changes in network structure, such as adding or removing ties, alter propagation speed.
  • If the same prompt techniques work across different LLMs, the simulation cost could drop rapidly with newer model releases.
  • Hybrid systems might combine S3-style agents with classical diffusion equations to handle very large networks.
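The hybrid idea in the last bullet would pair LLM agents with a cheap classical diffusion process for the bulk of the network. As a reference point for the classical half, here is a standard independent-cascade step; the toy graph and activation probability are illustrative.

```python
import random

def independent_cascade(graph, seeds, p=0.3, rng=None):
    """Standard independent-cascade diffusion: each newly activated
    node gets one chance to activate each neighbor with probability p."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# toy directed network: a hub (node 0) feeding two paths that rejoin
g = {0: [1, 2, 3], 1: [4], 2: [4], 3: [], 4: [5]}
reached = independent_cascade(g, seeds=[0], p=1.0)
# with p=1.0 every reachable node activates: {0, 1, 2, 3, 4, 5}
```

In a hybrid scheme, the LLM agents would replace the coin flip for a small set of focal nodes, while the cascade rule handles the long tail of the network.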

Load-bearing premise

Prompt engineering and prompt tuning are enough to make LLM agents emulate real human behavior in social networks closely enough that the resulting population-level patterns are meaningful.

What would settle it

Run the S3 system on the same real-world social network dataset but disable the agents' ability to perceive the informational environment; if the propagation of information, attitudes, and emotions no longer emerges or matches real patterns, the central claim is falsified.
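That test can be phrased as an ablation harness that runs the same update loop with the feed blanked. In this sketch the LLM agent is replaced by a toy attitude-drift rule purely to show the experimental design; the numbers are illustrative.

```python
def simulate(n_steps, agents, feeds, step_fn, perceive=True):
    """Run the same agent update loop with or without access to
    the informational environment (feeds)."""
    history = []
    for t in range(n_steps):
        for a in agents:
            feed = feeds[t] if perceive else []   # ablation: blind the agent
            step_fn(a, feed)
        history.append(sum(a["attitude"] for a in agents) / len(agents))
    return history

# Toy stand-in for the LLM agent: attitude drifts halfway toward the
# mean sentiment of what the agent reads; with no feed, it stays put.
def toy_step(agent, feed):
    if feed:
        agent["attitude"] += 0.5 * (sum(feed) / len(feed) - agent["attitude"])

agents_a = [{"attitude": 0.0} for _ in range(10)]
agents_b = [{"attitude": 0.0} for _ in range(10)]
feeds = [[1.0, 1.0]] * 5   # a uniformly positive information environment

with_perception = simulate(5, agents_a, feeds, toy_step, perceive=True)
without = simulate(5, agents_b, feeds, toy_step, perceive=False)
# attitudes converge toward 1.0 only when perception is enabled
```

If the real S³ agents behaved like the blinded condition, i.e. propagation curves no longer tracked the observed data, the perception mechanism, not generic LLM tendencies, would be established as the load-bearing component.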

Original abstract

Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity check, and axiom ledger. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces the S³ (Social network Simulation System) that employs large language model-empowered agents to simulate social networks. Using prompt engineering and prompt tuning, agents are designed to perceive informational environments and emulate human behaviors across emotion, attitude, and interaction. The authors report observing emergent population-level phenomena such as the propagation of information, attitudes, and emotions, and claim promising accuracy when evaluated against real-world social network data at two simulation levels. This is positioned as an initial exploration of LLM-based agent simulations in social science.

Significance. If the micro-level agent behaviors can be shown to match human distributions rather than LLM artifacts, this approach could enable more flexible and interpretable simulations of complex social dynamics than traditional agent-based models that rely on hand-crafted rules or fitted parameters. The evaluation on real-world data is a positive step toward falsifiability, but the current lack of detailed validation limits the immediate impact.

major comments (2)
  1. [Evaluation] Evaluation section: the abstract states 'promising accuracy' on real-world data at two simulation levels, but provides no specific metrics, baselines, error bars, statistical tests, or details on how human-likeness was validated beyond aggregate match. This is load-bearing for the central claim that observed propagations reflect genuine social dynamics.
  2. [Agent Design] Agent design and prompt engineering sections: the reliance on prompt engineering to emulate human perception, reasoning, and action is not accompanied by robustness checks (e.g., ablation across prompt variants or alternative LLMs) or micro-level comparisons (e.g., response time distributions, emotional valence shifts, or interaction selectivity against human data). Without these, population emergence could arise from generic LLM tendencies rather than modeled social processes.
minor comments (2)
  1. [Abstract] The abstract would be clearer if it named the two simulation levels and the specific real-world datasets used.
  2. Notation for the three simulated aspects (emotion, attitude, interaction) should be introduced consistently when first defined.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. The comments highlight key areas where we can improve the presentation of our evaluation results and agent design choices. We address each major comment point by point below and indicate the planned revisions.

Point-by-point responses
  1. Referee: [Evaluation] Evaluation section: the abstract states 'promising accuracy' on real-world data at two simulation levels, but provides no specific metrics, baselines, error bars, statistical tests, or details on how human-likeness was validated beyond aggregate match. This is load-bearing for the central claim that observed propagations reflect genuine social dynamics.

    Authors: We agree that greater quantitative detail is needed to support the evaluation claims. The manuscript currently reports alignment between simulated and real-world data at the individual agent level (emotion and attitude updates) and the population level (propagation of information, attitudes, and emotions) using two real social network datasets. To address this concern, we will revise the abstract for precision and expand the evaluation section to include specific metrics (such as correlation coefficients and mean absolute errors for propagation trends), comparisons against baselines like random-walk or rule-based diffusion models, error bars from repeated simulation runs, and appropriate statistical tests for significance. These changes will more rigorously substantiate the observed phenomena. revision: yes

  2. Referee: [Agent Design] Agent design and prompt engineering sections: the reliance on prompt engineering to emulate human perception, reasoning, and action is not accompanied by robustness checks (e.g., ablation across prompt variants or alternative LLMs) or micro-level comparisons (e.g., response time distributions, emotional valence shifts, or interaction selectivity against human data). Without these, population emergence could arise from generic LLM tendencies rather than modeled social processes.

    Authors: This concern about potential LLM artifacts versus intentionally modeled social processes is well-taken. Our agent architecture uses distinct prompt modules grounded in social science concepts for environmental perception, attitude updating via influence mechanisms, and selective interaction. We will add robustness checks in the revision, including ablations that isolate prompt components (e.g., emotion versus attitude modules) and results across alternative LLMs. For micro-level comparisons such as response time distributions or interaction selectivity, the present work emphasizes aggregate emergence; obtaining fine-grained human distributional data for direct matching would require new empirical studies outside the scope of this initial exploration. We will add an explicit limitations discussion and future-work section addressing this gap. revision: partial
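The quantitative detail promised in response 1, correlation and mean absolute error between simulated and observed propagation curves, is standard. A minimal, self-contained sketch on illustrative cumulative-adoption curves (the numbers are invented for the example):

```python
import math

def mae(sim, real):
    """Mean absolute error between two equal-length curves."""
    return sum(abs(s - r) for s, r in zip(sim, real)) / len(sim)

def pearson(sim, real):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(sim)
    ms, mr = sum(sim) / n, sum(real) / n
    cov = sum((s - ms) * (r - mr) for s, r in zip(sim, real))
    vs = math.sqrt(sum((s - ms) ** 2 for s in sim))
    vr = math.sqrt(sum((r - mr) ** 2 for r in real))
    return cov / (vs * vr)

# illustrative cumulative-adoption curves (fraction of network reached)
real = [0.05, 0.20, 0.45, 0.70, 0.85, 0.90]
sim  = [0.04, 0.18, 0.50, 0.68, 0.80, 0.92]

print(f"MAE = {mae(sim, real):.3f}, r = {pearson(sim, real):.3f}")
```

Error bars would come from repeating the simulation with different random seeds and reporting the spread of these two statistics across runs.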

Circularity Check

0 steps flagged

No circularity: simulation outputs validated on external real-world data

Full rationale

The paper describes an LLM-agent simulation system built via prompt engineering and tuning to emulate individual human behaviors in social networks, then reports population-level emergence of information/attitude/emotion propagation. These outputs are compared directly to independent real-world social network datasets at two simulation levels, with no equations, fitted parameters, or self-referential definitions that reduce the claimed accuracy or emergence to the inputs by construction. The methodology is self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the assumption that current LLMs can be prompted to produce sufficiently human-like individual behaviors; no free parameters or invented entities are explicitly introduced in the abstract.

axioms (1)
  • domain assumption: LLMs possess human-like capabilities in sensing, reasoning, and behaving that can be elicited via prompt engineering and tuning.
    Stated in the abstract as the foundation for constructing agents that emulate genuine humans.

pith-pipeline@v0.9.0 · 5563 in / 1124 out tokens · 23512 ms · 2026-05-17T11:24:12.286746+00:00 · methodology

discussion (0)


Forward citations

Cited by 17 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AgentSocialBench: Evaluating Privacy Risks in Human-Centered Agentic Social Networks

    cs.AI 2026-04 unverdicted novelty 8.0

    AgentSocialBench demonstrates that privacy preservation is fundamentally harder in human-centered agentic social networks than in single-agent cases due to cross-domain coordination pressures and an abstraction parado...

  2. Mechanism Plausibility in Generative Agent-Based Modeling

    cs.MA 2026-05 unverdicted novelty 7.0

    Introduces the Mechanism Plausibility Scale to distinguish generative sufficiency from mechanistic plausibility in LLM-based agent-based models.

  3. Graph World Models: Concepts, Taxonomy, and Future Directions

    cs.AI 2026-04 unverdicted novelty 7.0

    The paper unifies emerging graph-based world models under a new paradigm and proposes a taxonomy organized by spatial, physical, and logical relational inductive biases.

  4. Agentic World Modeling: Foundations, Capabilities, Laws, and Beyond

    cs.AI 2026-04 unverdicted novelty 7.0

    Proposes a levels x laws taxonomy for world models in AI agents, defining L1-L3 capabilities across physical, digital, social, and scientific regimes while reviewing over 400 works to outline a roadmap for advanced ag...

  5. IntervenSim: Intervention-Aware Social Network Simulation for Opinion Dynamics

    cs.SI 2026-04 unverdicted novelty 7.0

    IntervenSim is an intervention-aware social network simulation that couples source interventions with crowd interactions in a feedback loop, improving MAPE by 41.6% and DTW by 66.9% over prior static frameworks on rea...

  6. AgentMark: Utility-Preserving Behavioral Watermarking for Agents

    cs.CR 2026-01 unverdicted novelty 7.0

    AgentMark watermarks agent planning behaviors with multi-bit identifiers via conditional sampling that preserves utility and works on black-box systems.

  7. Assessing Capabilities of Large Language Models in Social Media Analytics: A Multi-task Quest

    cs.CL 2026-04 unverdicted novelty 6.0

    LLMs show mixed results on authorship verification, post generation, and attribute inference from Twitter data, with new frameworks and user studies establishing benchmarks for these analytics tasks.

  8. Topology-Aware LLM-Driven Social Simulation: A Unified Framework for Efficient and Realistic Agent Dynamics

    cs.SI 2026-04 unverdicted novelty 6.0

    TopoSim integrates network topology into LLM agent simulations via backbone units and heterogeneous influence to cut token use 50-90% while improving fidelity to real-world structures.

  9. SOCIA-EVO: Automated Simulator Construction via Dual-Anchored Bi-Level Optimization

    cs.AI 2026-04 unverdicted novelty 6.0

    SOCIA-EVO generates statistically consistent simulators by separating structural refinement from parameter calibration via bi-level optimization and falsifying strategies through execution feedback in a Bayesian-weigh...

  10. Beyond Individual Mimicry: Constructing Human-Like Social network with Graph-Augmented LLM Agents

    cs.SI 2026-03 unverdicted novelty 6.0

    GraphMind equips LLM agents with graph awareness to construct human-like social networks, producing botnets that substantially degrade performance of both text-based and graph-based detectors.

  11. Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility

    cs.SI 2026-02 unverdicted novelty 6.0

    LLM simulations of misinformation susceptibility overstate attitudinal associations and largely ignore personal network characteristics compared to human survey data.

  12. Cognitive Architectures for Language Agents

    cs.AI 2023-09 accept novelty 6.0

    CoALA is a modular cognitive architecture for language agents that organizes memory components, action spaces for internal and external interaction, and a generalized decision-making loop to support more systematic de...

  13. A Survey on Large Language Model based Autonomous Agents

    cs.AI 2023-08 accept novelty 6.0

    A survey of LLM-based autonomous agents that proposes a unified framework for their construction and reviews applications in social science, natural science, and engineering along with evaluation methods and future di...

  14. From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs

    cs.IR 2025-04 unverdicted novelty 5.0

    The paper surveys human memory categories, maps them to LLM memory, and proposes a new three-dimension (object, form, time) categorization into eight quadrants to organize existing work and highlight open problems.

  15. Network Effects and Agreement Drift in LLM Debates

    cs.SI 2026-04 unverdicted novelty 4.0

    LLM agents in controlled network debates show agreement drift toward specific opinion positions, requiring separation of structural effects from LLM biases before using them as human behavioral proxies.

  16. Large Language Model based Multi-Agents: A Survey of Progress and Challenges

    cs.CL 2024-01 unverdicted novelty 4.0

    The paper surveys LLM-based multi-agent systems, covering simulated domains, agent profiling and communication, mechanisms for capacity growth, and common benchmarks.

  17. A Survey on the Memory Mechanism of Large Language Model based Agents

    cs.AI 2024-04 accept novelty 3.0

    A systematic review of memory designs, evaluation methods, applications, limitations, and future directions for LLM-based agents.

Reference graph

Works this paper leans on

46 extracted references · 46 canonical work pages · cited by 17 Pith papers · 4 internal anchors

  1. [1]

    Using large language models to simulate multiple humans and replicate human subject studies

    Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans and replicate human subject studies. In International Conference on Machine Learning, pages 337–371. PMLR, 2023

  2. [2]

    Advancing the art of simulation in the social sciences

    Robert Axelrod. Advancing the art of simulation in the social sciences. In Simulating social phenomena, pages 21–40. Springer, 1997

  3. [3]

    Modeling echo chambers and polarization dynamics in social networks

    Fabian Baumann, Philipp Lorenz-Spreen, Igor M Sokolov, and Michele Starnini. Modeling echo chambers and polarization dynamics in social networks.Physical Review Letters, 124(4):048301, 2020

  4. [4]

    Emergence of polarized ideological opinions in multidimensional topic spaces

    Fabian Baumann, Philipp Lorenz-Spreen, Igor M Sokolov, and Michele Starnini. Emergence of polarized ideological opinions in multidimensional topic spaces. Physical Review X, 11(1):011012, 2021

  5. [5]

    A guide to simulation, 1987

    Paul Bratley, Bennett L Fox, and Linus E Schrage. A guide to simulation, 1987

  6. [6]

    Language models are few-shot learners

    Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020

  7. [7]

    Understanding social preferences with simple tests

    Gary Charness and Matthew Rabin. Understanding social preferences with simple tests. The quarterly journal of economics, 117(3):817–869, 2002

  8. [8]

    Scalable influence maximization in social networks under the linear threshold model

    Wei Chen, Yifei Yuan, and Li Zhang. Scalable influence maximization in social networks under the linear threshold model. In 2010 IEEE international conference on data mining, pages 88–97. IEEE, 2010

  9. [9]

    Cellular automata

    Bastien Chopard and Michel Droz. Cellular automata. Modelling of Physical, pages 6–13, 1998

  10. [10]

    PaLM: Scaling Language Modeling with Pathways

    Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022

  11. [11]

    Reaching a consensus

    Morris H DeGroot. Reaching a consensus. Journal of the American Statistical association , 69(345):118–121, 1974

  12. [12]

    GLM: General language model pretraining with autoregressive blank infilling

    Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, Dublin, Ireland, May 2022. Association for Computational Linguistics

  13. [13]

    Palm 2 technical report, 2023

    Rohan Anil et al. Palm 2 technical report, 2023

  14. [14]

    Political polarization of news media and influencers on twitter in the 2016 and 2020 us presidential elections

    James Flamino, Alessandro Galeazzi, Stuart Feldman, Michael W Macy, Brendan Cross, Zhenkun Zhou, Matteo Serafino, Alexandre Bovet, Hernán A Makse, and Boleslaw K Szymanski. Political polarization of news media and influencers on twitter in the 2016 and 2020 us presidential elections. Nature Human Behaviour, pages 1–13, 2023

  15. [15]

    System dynamics and the lessons of 35 years

    Jay W Forrester. System dynamics and the lessons of 35 years. In A systems-based approach to policymaking, pages 199–240. Springer, 1993

  16. [16]

    Simulation for the social scientist

    Nigel Gilbert and Klaus Troitzsch. Simulation for the social scientist. McGraw-Hill Education (UK), 2005

  17. [17]

    Evaluating large language models in generating synthetic hci research data: a case study

    Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. Evaluating large language models in generating synthetic hci research data: a case study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–19, 2023

  18. [18]

    Minimum-sized influential node set selection for social networks under the independent cascade model

    Jing He, Shouling Ji, Raheem Beyah, and Zhipeng Cai. Minimum-sized influential node set selection for social networks under the independent cascade model. In Proceedings of the 15th ACM International Symposium on Mobile ad hoc Networking and Computing, pages 93–102, 2014

  19. [19]

    Quantifying ideological polarization on a network using generalized euclidean distance

    Marilena Hohmann, Karel Devriendt, and Michele Coscia. Quantifying ideological polarization on a network using generalized euclidean distance. Science Advances, 9(9):eabq2044, 2023

  20. [20]

    Large language models as simulated economic agents: What can we learn from homo silicus?

    John J Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research, 2023

  21. [21]

    A simulation model of police patrol operations: program description

    Peter Kolesar and Warren E Walker. A simulation model of police patrol operations: program description. 1975

  22. [22]

    All one needs to know about metaverse: A complete survey on technological singularity, virtual ecosystem, and research agenda

    Lik-Hang Lee, Tristan Braud, Pengyuan Zhou, Lin Wang, Dianlei Xu, Zijun Lin, Abhishek Kumar, Carlos Bermejo, and Pan Hui. All one needs to know about metaverse: A complete survey on technological singularity, virtual ecosystem, and research agenda. arXiv preprint arXiv:2110.05352, 2021

  23. [23]

    Emergence of polarization in coevolving networks

    Jiazhen Liu, Shengda Huang, Nathaniel M Aden, Neil F Johnson, and Chaoming Song. Emergence of polarization in coevolving networks. Physical Review Letters, 130(3):037401, 2023

  24. [24]

    P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks

    Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland, May 2022. Association for Computational Li...

  25. [25]

    A systematic review of worldwide causal and correlational evidence on digital media and democracy

    Philipp Lorenz-Spreen, Lisa Oswald, Stephan Lewandowsky, and Ralph Hertwig. A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nature human behaviour, 7(1):74–101, 2023

  26. [26]

    Information propagation

    Stefan Luding. Information propagation. Nature, 435(7039):159–160, 2005

  27. [27]

    Using system dynamics to model the social security system

    Lawrence C Marsh and Meredith Scovill. Using system dynamics to model the social security system. In NBER Workshop on Policy Analysis with Social Security Research Files, pages 15–17, 1978

  28. [28]

    Dynamics of growth in a finite world

    Dennis L Meadows, William W Behrens, Donella H Meadows, Roger F Naill, Jørgen Randers, and Erich Zahn. Dynamics of growth in a finite world. Wright-Allen Press Cambridge, MA, 1974

  29. [29]

    Universality, criticality and complexity of information propagation in social media

    Daniele Notarmuzi, Claudio Castellano, Alessandro Flammini, Dario Mazzilli, and Filippo Radicchi. Universality, criticality and complexity of information propagation in social media. Nature communications, 13(1):1308, 2022

  30. [30]

    Predicting opinion dynamics via sociologically-informed neural networks

    Maya Okawa and Tomoharu Iwata. Predicting opinion dynamics via sociologically-informed neural networks. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining, pages 1306–1316, 2022

  31. [31]

    Gpt-4 technical report, 2023

    OpenAI. Gpt-4 technical report, 2023

  32. [32]

    Generative Agents: Interactive Simulacra of Human Behavior

    Joon Sung Park, Joseph C O’Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023

  33. [33]

    Deepinf: Social influence prediction with deep learning

    Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, and Jie Tang. Deepinf: Social influence prediction with deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’18, page 2110–2119, New York, NY, USA, 2018. Association for Computing Machinery

  34. [34]

    Status quo bias in decision making

    William Samuelson and Richard Zeckhauser. Status quo bias in decision making. Journal of risk and uncertainty, 1:7–59, 1988

  35. [35]

    Link recommendation algorithms and dynamics of polarization in online social networks

    Fernando P Santos, Yphtach Lelkes, and Simon A Levin. Link recommendation algorithms and dynamics of polarization in online social networks. Proceedings of the National Academy of Sciences, 118(50):e2102141118, 2021

  36. [36]

    Spinning the web of hate: Web-based hate propagation by extremist organizations

    Joseph A Schafer. Spinning the web of hate: Web-based hate propagation by extremist organizations. Journal of Criminal Justice and Popular Culture, 2002

  37. [37]

    Effects of age and gender on blogging

    Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199–205, 2006

  38. [38]

    The effect of oil discoveries on the british economy—theoretical ambiguities and the consistent expectations simulation approach

    Peter D Spencer. The effect of oil discoveries on the british economy—theoretical ambiguities and the consistent expectations simulation approach. The Economic Journal, 94(375):633–644, 1984

  39. [39]

    LLaMA: Open and Efficient Foundation Language Models

    Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023

  40. [40]

    Social science microsimulation

    Klaus G Troitzsch. Social science microsimulation. Springer Science & Business Media, 1996

  41. [41]

    Global evidence of expressed sentiment alterations during the covid-19 pandemic

    Jianghao Wang, Yichun Fan, Juan Palacios, Yuchen Chai, Nicolas Guetta-Jeanrenaud, Nick Obradovich, Chenghu Zhou, and Siqi Zheng. Global evidence of expressed sentiment alterations during the covid-19 pandemic. Nature Human Behaviour, 6(3):349–358, 2022

  42. [42]

    Detecting and modelling real percolation and phase transitions of information on social media

    Jiarong Xie, Fanhui Meng, Jiachen Sun, Xiao Ma, Gang Yan, and Yanqing Hu. Detecting and modelling real percolation and phase transitions of information on social media. Nature Human Behaviour, 5(9):1161–1168, 2021

  43. [43]

    Voting models in random networks

    Mehmet E Yildiz, Roberto Pagliari, Asuman Ozdaglar, and Anna Scaglione. Voting models in random networks. In 2010 information theory and applications workshop (ITA), pages 1–7. IEEE, 2010

  44. [44]

    Neural dynamics on complex networks

    Chengxi Zang and Fei Wang. Neural dynamics on complex networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pages 892–902, 2020

  45. [45]

    GLM-130B: An Open Bilingual Pre-trained Model

    Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022

  46. [46]

    Who influenced you? predicting retweet via social influence locality

    Jing Zhang, Jie Tang, Juanzi Li, Yang Liu, and Chunxiao Xing. Who influenced you? predicting retweet via social influence locality. ACM Trans. Knowl. Discov. Data, 9(3), apr 2015