pith. machine review for the scientific record.

arxiv: 1710.03748 · v3 · submitted 2017-10-10 · 💻 cs.AI

Recognition: unknown

Emergent Complexity via Multi-Agent Competition

Authors on Pith: no claims yet
classification: 💻 cs.AI
keywords: environment, agents, complex, behaviors, complexity, environments, level, multi-agent
0 comments
read the original abstract

Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself. We also point out that such environments come with a natural curriculum, because for any skill level, an environment full of agents of this level will have the right level of difficulty. This work introduces several competitive multi-agent environments where agents compete in a 3D world with simulated physics. The trained agents learn a wide variety of complex and interesting skills, even though the environments themselves are relatively simple. The skills include behaviors such as running, blocking, ducking, tackling, fooling opponents, kicking, and defending using both arms and legs. A highlight of the learned behaviors can be found here: https://goo.gl/eR7fbX
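The "natural curriculum" the abstract describes arises because a self-play learner always faces opponents drawn from its own history, so match difficulty tracks its current skill. A minimal sketch of that loop, assuming hypothetical `play_match` and `update_policy` callables supplied by the training code (the names and the uniform snapshot sampling are illustrative simplifications, not the paper's exact method):

```python
import random

class SelfPlayTrainer:
    """Minimal sketch of self-play with past-opponent sampling.

    The learner trains against frozen snapshots of itself, so for any
    skill level the opponent pool has roughly the right difficulty --
    the 'natural curriculum' described in the abstract.
    """

    def __init__(self, initial_policy):
        self.policy = initial_policy
        self.snapshots = [initial_policy]  # frozen past versions

    def sample_opponent(self):
        # Uniformly sample any past snapshot; a real setup might
        # restrict sampling to recent snapshots instead.
        return random.choice(self.snapshots)

    def train_step(self, play_match, update_policy):
        opponent = self.sample_opponent()
        trajectory = play_match(self.policy, opponent)
        self.policy = update_policy(self.policy, trajectory)

    def checkpoint(self):
        # Freeze the current policy into the opponent pool.
        self.snapshots.append(self.policy)
```

Here `play_match` would roll out one episode between the two policies and `update_policy` would apply an RL update (e.g. a policy-gradient step) from the resulting trajectory.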

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AI safety via debate

    stat.ML 2018-05 conditional novelty 8.0

    AI agents trained through competitive debate can allow polynomial-time human judges to oversee PSPACE-level questions, with MNIST experiments boosting sparse classifier accuracy from 59% to 89% using only 6 pixels.

  2. Dota 2 with Large Scale Deep Reinforcement Learning

    cs.LG 2019-12 accept novelty 7.0

    OpenAI Five achieved superhuman performance in Dota 2 by defeating the world champions using scaled self-play reinforcement learning.

  3. BehaviorGuard: Online Backdoor Defense for Deep Reinforcement Learning

    cs.AI 2026-05 unverdicted novelty 6.0

    BehaviorGuard detects backdoor behaviors in DRL policies via behavioral drift in action distributions and suppresses suspicious actions at runtime, claimed as the first online defense for both single- and multi-agent ...

  4. Competitor-aware Race Management for Electric Endurance Racing

    eess.SY 2026-03 unverdicted novelty 6.0

    A bi-level game-theoretic optimal control plus reinforcement learning framework enables competitor-aware energy management and pit-stop scheduling that exploits aerodynamic drafting in simulated electric endurance races.