Department of Mathematics




Publications [#243797] of Mauro Maggioni

Papers Published

  1. Mahadevan, S; Maggioni, M; Ferguson, K; Osentoski, S, Learning representation and control in continuous Markov decision processes, Proceedings of the National Conference on Artificial Intelligence, vol. 2 (November, 2006), pp. 1194-1199
    (last updated on 2019/02/16)

    This paper presents a novel framework for simultaneously learning representation and control in continuous Markov decision processes. Our approach builds on the framework of proto-value functions, in which the underlying representation or basis functions are automatically derived from a spectral analysis of the state space manifold. The proto-value functions correspond to the eigenfunctions of the graph Laplacian. We describe an approach to extending the eigenfunctions to novel states using the Nyström extension. A least-squares policy iteration method is used to learn the control policy, where the underlying subspace for approximating the value function is spanned by the learned proto-value functions. A detailed set of experiments is presented using classic benchmark tasks, including the inverted pendulum and the mountain car, showing the sensitivity of performance to various parameters, and including comparisons with a parametric radial basis function method. Copyright © 2006, American Association for Artificial Intelligence. All rights reserved.
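    The two representational steps the abstract describes — computing proto-value functions as eigenvectors of a graph Laplacian over sampled states, then extending them to novel states with the Nyström method — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian affinity kernel, its bandwidth `sigma`, and the function names are assumptions for the sake of the example.

    ```python
    import numpy as np

    def affinities(X, Y, sigma=0.5):
        # Gaussian kernel between rows of X and rows of Y (assumed affinity choice)
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def proto_value_functions(X, k=4, sigma=0.5):
        # Eigenvectors of the normalized affinity S = D^{-1/2} W D^{-1/2};
        # its top-k eigenvectors are the smoothest eigenfunctions of the
        # normalized graph Laplacian L = I - S, i.e. the proto-value functions.
        W = affinities(X, X, sigma)
        d = W.sum(axis=1)                      # vertex degrees
        S = W / np.sqrt(np.outer(d, d))
        vals, vecs = np.linalg.eigh(S)         # ascending eigenvalues
        idx = np.argsort(vals)[::-1][:k]       # largest eigenvalues of S
        return vals[idx], vecs[:, idx], d

    def nystrom_extend(x_new, X, vals, vecs, d, sigma=0.5):
        # Nystrom extension: phi(x) ~ (1/lambda) * sum_j S(x, x_j) * phi_j,
        # giving approximate eigenfunction values at an unsampled state.
        w = affinities(x_new[None, :], X, sigma)[0]
        s = w / np.sqrt(w.sum() * d)           # normalized affinities to samples
        return (s @ vecs) / vals
    ```

    The extended features can then serve as the fixed basis inside a least-squares policy iteration loop, with the value function approximated as a linear combination of the k proto-value functions.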
Mathematics Department
Duke University, Box 90320
Durham, NC 27708-0320
ph: 919.660.2800
fax: 919.660.2821