Math @ Duke

Publications [#258516] of Sayan Mukherjee
Papers Published
 Peshkin, L; Mukherjee, S, Bounds on sample size for policy evaluation in Markov environments,
Lecture Notes in Computer Science, vol. 2111
(January, 2001),
pp. 616-629, ISSN 0302-9743
(last updated on 2017/04/01)
Abstract: © Springer-Verlag Berlin Heidelberg 2001. Reinforcement learning means finding the optimal course of action in Markovian environments without knowledge of the environment's dynamics. Stochastic optimization algorithms used in the field rely on estimates of the value of a policy. Typically, the value of a policy is estimated from results of simulating that very policy in the environment. This approach requires a large amount of simulation as different points in the policy space are considered. In this paper, we develop value estimators that utilize data gathered when using one policy to estimate the value of using another policy, resulting in much more data-efficient algorithms. We consider the question of accumulating sufficient experience and give PAC-style bounds.
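The idea of estimating one policy's value from data gathered under another is commonly realized with importance sampling: each trajectory's return is reweighted by the likelihood ratio of its actions under the evaluated versus the behavior policy. The sketch below is a minimal illustration of that general technique, not the paper's exact estimator; the function names and the callable-policy interface (`pi(state, action)` returning an action probability) are assumptions made for the example.

```python
def is_estimate(trajectories, pi_b, pi_e, gamma=0.95):
    """Off-policy value estimate of pi_e from trajectories sampled under pi_b,
    via per-trajectory importance sampling (illustrative sketch only).

    Each trajectory is a list of (state, action, reward) tuples; pi_b and pi_e
    map (state, action) to the probability of taking that action in that state."""
    total = 0.0
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            # Likelihood ratio of the action sequence so far
            weight *= pi_e(s, a) / pi_b(s, a)
            # Discounted return of this trajectory
            ret += gamma ** t * r
        total += weight * ret
    return total / len(trajectories)
```

For intuition: with one-step trajectories from a uniform behavior policy over two actions, where only action 0 yields reward 1.0, evaluating a policy that always picks action 0 recovers its true value of 1.0 from the mixed data. The data-efficiency gain the abstract describes comes from reusing one batch of trajectories to score many candidate policies this way, at the cost of estimator variance that grows with the mismatch between the policies.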


dept@math.duke.edu
ph: 919.660.2800
fax: 919.660.2821
 
Mathematics Department
Duke University, Box 90320
Durham, NC 27708-0320

