Reinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU
Abstract
We introduce a hybrid CPU/GPU version of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on aspects critical to leveraging the GPU's computational power. We propose a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU/GPU version of A3C, based on TensorFlow, achieves a significant speedup compared to a CPU implementation; we make it publicly available to other researchers at https://github.com/NVlabs/GA3C.
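The queue-and-scheduler design is the core idea: instead of each CPU agent running its own copy of the model, agents submit states to a shared prediction queue, and a predictor thread folds many pending states into one batched GPU forward pass. Below is a minimal Python sketch of that batching pattern, assuming stdlib queues and a stub batch_predict in place of the TensorFlow model; the class name, queue sizes, and agent counts are illustrative assumptions, not the authors' GA3C code.

```python
import queue
import random
import threading

BATCH_SIZE = 8  # max states folded into one forward pass (illustrative)
prediction_queue = queue.Queue()

class PredictionRequest:
    """A state submitted by an agent, plus a one-slot channel for the reply."""
    def __init__(self, state):
        self.state = state
        self.reply = queue.Queue(maxsize=1)

def batch_predict(states):
    # Stand-in for one batched policy forward pass; the real system runs
    # the shared TensorFlow model on the GPU here.
    return [random.randrange(4) for _ in states]

def predictor():
    while True:
        # Block on the first request, then greedily drain the queue up to
        # BATCH_SIZE so a single GPU call serves many agents at once.
        requests = [prediction_queue.get()]
        while len(requests) < BATCH_SIZE:
            try:
                requests.append(prediction_queue.get_nowait())
            except queue.Empty:
                break
        actions = batch_predict([r.state for r in requests])
        for request, action in zip(requests, actions):
            request.reply.put(action)

def agent(agent_id, steps=5):
    for t in range(steps):
        request = PredictionRequest(state=(agent_id, t))
        prediction_queue.put(request)
        action = request.reply.get()  # wait for the batched prediction
        # ...step the environment with `action` and queue the experience
        # on a training queue (omitted) for the trainer threads.

threading.Thread(target=predictor, daemon=True).start()
workers = [threading.Thread(target=agent, args=(i,)) for i in range(16)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

The dynamic scheduling strategy mentioned in the abstract adjusts how many predictor and trainer threads run concurrently to keep the GPU saturated; the sketch fixes a single predictor for brevity.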
Forward citations
Cited by 2 Pith papers
- ERPPO: Entropy Regularization-based Proximal Policy Optimization. Adds a DSA-based ambiguity estimator to MAPPO and switches between L1 and L2 entropy regularization to improve exploration and stability under non-stationary, multi-dimensional observations (a generic sketch of this switching pattern follows the list).
- Biologically Inspired Event-Based Perception and Sample-Efficient Learning for High-Speed Table Tennis Robots. Combines event-based perception with progressive low-to-high speed training to improve robotic table tennis return accuracy by 35.8% within the same number of training episodes.
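The ERPPO summary above names a switching mechanism without giving details. Purely as a generic illustration of toggling between an L1-style and an L2-style regularizer on an ambiguity signal, here is a small Python sketch; the threshold, the penalty forms, and the function names are hypothetical stand-ins and do not reproduce ERPPO's DSA-based estimator or its MAPPO integration.

```python
def switchable_entropy_reg(probs, ambiguity, threshold=0.5):
    """Penalty that discourages peaked (low-entropy) action distributions.

    Deviation from the uniform distribution is penalized with an L1 norm
    when the ambiguity estimate is high and a squared L2 norm otherwise;
    both vanish at maximum entropy. The threshold is a made-up constant.
    """
    uniform = 1.0 / len(probs)
    if ambiguity > threshold:
        return sum(abs(p - uniform) for p in probs)   # L1-style term
    return sum((p - uniform) ** 2 for p in probs)     # L2-style term

def regularized_policy_loss(surrogate_loss, probs, ambiguity, beta=0.01):
    # Hypothetical composition: a PPO-style surrogate loss plus the
    # switchable regularizer, weighted by the coefficient beta.
    return surrogate_loss + beta * switchable_entropy_reg(probs, ambiguity)
```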