pith. machine review for the scientific record.

arxiv: 1704.05147 · v2 · submitted 2017-04-17 · 💻 cs.LG · stat.ML

Recognition: unknown

O²TD: (Near)-Optimal Off-Policy TD Learning

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: difference · function · learning · off-policy · optimal · temporal · true · value
Original abstract:

Temporal difference (TD) learning and residual gradient methods are the most widely used TD-based learning algorithms; however, it has been shown that neither of their objective functions is optimal with respect to approximating the true value function $V$. Two novel algorithms are proposed to approximate the true value function $V$. This paper makes the following contributions: (1) a batch algorithm that helps find an approximately optimal off-policy prediction of the true value function $V$; (2) a near-optimal algorithm with linear computational cost per step that can learn from a collection of off-policy samples; (3) a new perspective on emphatic temporal difference learning that bridges the gap between off-policy optimality and off-policy stability.
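For readers unfamiliar with the setting the abstract describes, the following is a minimal sketch of generic linear off-policy TD(0) with per-step importance sampling, the baseline family of methods the paper builds on. This is not the paper's O²TD algorithm; the function name, sample format, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def off_policy_td0(samples, n_features, alpha=0.1, gamma=0.99):
    """Linear off-policy TD(0) with per-step importance sampling (illustrative).

    `samples` is an iterable of (phi_s, phi_s_next, reward, rho) tuples, where
    phi_* are feature vectors for the current and next state, and
    rho = pi(a|s) / mu(a|s) is the importance-sampling ratio between the
    target policy pi and the behavior policy mu that generated the data.
    Returns the learned weight vector w, so that V(s) is approximated by w . phi(s).
    """
    w = np.zeros(n_features)
    for phi, phi_next, r, rho in samples:
        # TD error under the current weights: delta = r + gamma * V(s') - V(s)
        delta = r + gamma * np.dot(w, phi_next) - np.dot(w, phi)
        # Importance-weighted semi-gradient update toward the target policy
        w += alpha * rho * delta * phi
    return w
```

The per-step cost is linear in the number of features, the same complexity class the abstract claims for its near-optimal online algorithm; the semi-gradient objective minimized here, however, is exactly the kind the paper argues is not optimal with respect to the true value function $V$.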

This paper has not been read by Pith yet.

discussion (0)
