Publications of Jianfeng Lu
Papers Published
- Agazzi, A.; Lu, J., "Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes," PMLR, vol. 145 (May, 2019), pp. 37-74
Abstract: We discuss the approximation of the value function for infinite-horizon
discounted Markov Reward Processes (MRPs) by nonlinear functions trained with
the Temporal-Difference (TD) learning algorithm. We first consider this problem
under a certain scaling of the approximating function, leading to a regime
called lazy training. In this regime, the parameters of the model vary only
slightly during the learning process, a feature that has recently been observed
in the training of neural networks, where the scaling we study arises
naturally, implicitly through the initialization of their parameters. In both
the under- and over-parametrized frameworks, we prove exponential convergence
of the algorithm to local (respectively, global) minimizers in the lazy
training regime. We then compare this scaling of the parameters to the
mean-field regime, where the approximately linear behavior of the model is
lost. Under this alternative scaling we prove that all fixed points of the
dynamics in parameter space are global minimizers. Finally, we give examples
of our convergence results, both for models that diverge if trained with
non-lazy TD learning and for neural networks.
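
To make the setting concrete, below is a minimal sketch of semi-gradient TD(0) with a nonlinear one-hidden-layer approximator under an NTK-style 1/sqrt(m) output scaling, the kind of scaling associated with lazy training. The toy three-state MRP, the width m, and the step size are hypothetical choices for illustration, not the paper's experimental setup.

```python
# Minimal sketch: TD(0) with a nonlinear approximator under lazy (1/sqrt(m)) scaling.
# The 3-state MRP, width m, and learning rate below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

# --- Toy infinite-horizon discounted MRP (hypothetical) ---
P = np.array([[0.9, 0.1, 0.0],      # transition probabilities
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])
r = np.array([0.0, 0.0, 1.0])       # reward in each state
gamma = 0.9                          # discount factor
states = np.eye(3)                   # one-hot state features

# --- One-hidden-layer ReLU network with lazy (NTK-style) output scaling ---
m = 512
W = rng.normal(size=(m, 3))          # trainable input weights
W0 = W.copy()                        # keep initialization to measure parameter motion
a = rng.choice([-1.0, 1.0], size=m)  # output weights, frozen for simplicity

def value(s, W):
    # V(s) = (1/sqrt(m)) * sum_i a_i * relu(w_i . s)
    return a @ np.maximum(W @ s, 0.0) / np.sqrt(m)

def grad_W(s, W):
    # Gradient of V(s) with respect to W (ReLU derivative is an indicator)
    act = (W @ s > 0).astype(float)
    return (a * act)[:, None] * s[None, :] / np.sqrt(m)

# --- Semi-gradient TD(0) along a sampled trajectory ---
lr = 0.5
s_idx = 0
for t in range(20000):
    s = states[s_idx]
    s_next_idx = rng.choice(3, p=P[s_idx])
    # TD error: delta = r(s) + gamma * V(s') - V(s)
    delta = r[s_idx] + gamma * value(states[s_next_idx], W) - value(s, W)
    W += lr * delta * grad_W(s, W)
    s_idx = s_next_idx

V_true = np.linalg.solve(np.eye(3) - gamma * P, r)   # exact value: V = (I - gamma P)^(-1) r
V_learn = np.array([value(states[i], W) for i in range(3)])
print("true:   ", np.round(V_true, 3))
print("learned:", np.round(V_learn, 3))
print("relative parameter motion:", np.linalg.norm(W - W0) / np.linalg.norm(W0))
```

With one-hot features the induced tangent kernel is diagonal, so this TD(0) iteration converges to the exact value function, and the printed relative parameter motion shrinks as m grows, which is the "parameters vary only slightly" signature of the lazy regime that the abstract describes.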