Department of Mathematics

Publications [#258516] of Sayan Mukherjee

Papers Published

  1. Peshkin, L; Mukherjee, S, Bounds on sample size for policy evaluation in Markov environments, Lecture Notes in Computer Science, vol. 2111 (January, 2001), pp. 616-629, ISSN 0302-9743
    (last updated on 2017/04/01)

    Abstract:
    © Springer-Verlag Berlin Heidelberg 2001. Reinforcement learning means finding the optimal course of action in Markovian environments without knowledge of the environment's dynamics. Stochastic optimization algorithms used in the field rely on estimates of the value of a policy. Typically, the value of a policy is estimated from the results of simulating that very policy in the environment. This approach requires a large amount of simulation as different points in the policy space are considered. In this paper, we develop value estimators that use data gathered under one policy to estimate the value of another policy, resulting in much more data-efficient algorithms. We consider the question of accumulating sufficient experience and give PAC-style bounds.
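
    The abstract does not spell out the estimator, but a common way to reuse trajectories gathered under one policy to evaluate another is trajectory-wise importance sampling, reweighting each observed return by the likelihood ratio between the two policies. The sketch below illustrates that idea under assumed tabular policies and episode data structures; it is not taken from the paper itself.

        import numpy as np

        def off_policy_value_estimate(episodes, behavior_policy, target_policy, gamma=0.95):
            """Estimate the value of `target_policy` from episodes collected
            under `behavior_policy`, via per-trajectory importance sampling.

            episodes: list of episodes, each a list of (state, action, reward) tuples
            behavior_policy, target_policy: arrays of shape (n_states, n_actions)
                giving action probabilities (hypothetical tabular representation)
            """
            estimates = []
            for episode in episodes:
                weight = 1.0    # likelihood ratio of the whole trajectory
                ret = 0.0       # discounted return of the trajectory
                discount = 1.0
                for state, action, reward in episode:
                    # Reweight by how much more (or less) likely the target policy
                    # is to take this action than the behavior policy that produced it.
                    weight *= target_policy[state, action] / behavior_policy[state, action]
                    ret += discount * reward
                    discount *= gamma
                estimates.append(weight * ret)
            # The mean of the reweighted returns estimates the target policy's value
            # under the start-state distribution, without simulating the target policy.
            return float(np.mean(estimates))

    PAC-style sample-size bounds of the kind the paper gives would then control how many such episodes are needed for the estimate to fall within a chosen error of the true value with high probability; the spread of the likelihood ratios is what drives that sample requirement.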

 
