pith. machine review for the scientific record.

arxiv: 1805.00869 · v1 · submitted 2018-05-02 · 💻 cs.LG · math.OC · stat.ML


Approximate Temporal Difference Learning is a Gradient Descent for Reversible Policies

keywords: approximate, function, gradient, value, descent, even, policy
abstract

In reinforcement learning, temporal difference (TD) is the most direct algorithm to learn the value function of a policy. For large or infinite state spaces, exact representations of the value function are usually not available, and it must be approximated by a function in some parametric family. However, with \emph{nonlinear} parametric approximations (such as neural networks), TD is not guaranteed to converge to a good approximation of the true value function within the family, and is known to diverge even in relatively simple cases. TD lacks an interpretation as a stochastic gradient descent of an error between the true and approximate value functions, which would provide such guarantees. We prove that approximate TD is a gradient descent provided the current policy is \emph{reversible}. This holds even with nonlinear approximations. A policy with transition probabilities $P(s,s')$ between states is reversible if there exists a function $\mu$ over states such that $\frac{P(s,s')}{P(s',s)}=\frac{\mu(s')}{\mu(s)}$. In particular, every move can be undone with some probability. This condition is restrictive; it is satisfied, for instance, for a navigation problem in any unoriented graph. In this case, approximate TD is exactly a gradient descent of the \emph{Dirichlet norm}, the norm of the difference of \emph{gradients} between the true and approximate value functions. The Dirichlet norm also controls the bias of approximate policy gradient. These results hold even with no decay factor ($\gamma=1$) and do not rely on contractivity of the Bellman operator, thus proving stability of TD even with $\gamma=1$ for reversible policies.
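The reversibility condition quoted in the abstract, $\frac{P(s,s')}{P(s',s)}=\frac{\mu(s')}{\mu(s)}$, can be checked numerically. A minimal sketch for the abstract's own example, a random walk on an unoriented graph, where $\mu(s)$ can be taken as the degree of $s$ (the graph below is an arbitrary choice for illustration):

```python
import numpy as np

# Adjacency matrix of a small undirected 4-node graph (illustrative choice).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

deg = A.sum(axis=1)       # mu(s): the degree of each state s
P = A / deg[:, None]      # uniform random-walk transition probabilities

# Detailed balance, mu(s) P(s,s') == mu(s') P(s',s), is equivalent to the
# ratio condition P(s,s')/P(s',s) == mu(s')/mu(s) wherever both sides exist.
flow = deg[:, None] * P   # flow[s, s'] = mu(s) P(s, s') = A[s, s']
assert np.allclose(flow, flow.T)
print("random walk on an unoriented graph is reversible")
```

Here the flow matrix equals the adjacency matrix itself, which is symmetric for any unoriented graph — this is why the condition holds for every such navigation problem, not just this instance.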



Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Bridging the Gap Between Average and Discounted TD Learning

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    A new two-trajectory sampling algorithm for average-reward TD learning guarantees convergence with quadratic sample complexity and no explicit dimension dependence in both tabular and linear approximation settings.