Solving Large Imperfect Information Games Using CFR+
Counterfactual Regret Minimization and variants (e.g. Public Chance Sampling CFR and Pure CFR) have been known as the best approaches for creating approximate Nash equilibrium solutions for imperfect information games such as poker. This paper introduces CFR+, a new algorithm that typically outperforms the previously known algorithms by an order of magnitude or more in terms of computation time while also potentially requiring less memory.
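A minimal sketch of the mechanism behind CFR+: where plain CFR accumulates regrets without bound, CFR+ clamps cumulative regrets at zero after each update (regret matching+), so actions with large negative regret can re-enter the strategy quickly. The function names and the toy regret values here are illustrative, not from the paper.

```python
import numpy as np

def regret_matching(cum_regrets):
    """Strategy proportional to positive cumulative regrets;
    uniform when no action has positive regret."""
    pos = np.maximum(cum_regrets, 0.0)
    total = pos.sum()
    if total > 0.0:
        return pos / total
    return np.full(len(cum_regrets), 1.0 / len(cum_regrets))

def regret_matching_plus_update(cum_regrets, instant_regrets):
    """CFR+ accumulation: add this iteration's counterfactual regrets,
    then clamp the running totals at zero (the '+' in CFR+)."""
    return np.maximum(cum_regrets + instant_regrets, 0.0)

# Toy update at one information set with three actions.
cum = np.zeros(3)
cum = regret_matching_plus_update(cum, np.array([1.0, -2.0, 0.5]))
# The second action's total is clamped to 0 rather than stored as -2,
# so one good iteration later would restore it immediately.
strategy = regret_matching(cum)
```

Plain CFR would instead keep the raw sum (here `[1.0, -2.0, 0.5]`), forcing an action to "pay back" accumulated negative regret before being played again.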
Forward citations
Cited by 2 Pith papers
- On-line Learning in Tree MDPs by Treating Policies as Bandit Arms — Bandit algorithms can be adapted to Tree MDPs by treating policies as arms with shared-data confidence bounds, achieving polynomial memory and instance-dependent bounds on sample complexity and regret that depend on t...
- NePPO: Near-Potential Policy Optimization for General-Sum Multi-Agent Reinforcement Learning — NePPO learns a player-independent potential function via a novel objective whose minimization yields an approximate Nash equilibrium for general-sum multi-agent games.
Discussion (0)