
An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes

Publication Type : Journal Article

Publisher : Journal of Optimization Theory and Applications

Source : Journal of Optimization Theory and Applications, Volume 153, Number 3, pp. 688–708 (2012)

Url : http://dx.doi.org/10.1007/s10957-012-9989-5

Campus : Coimbatore

School : School of Engineering

Department : Computer Science

Year : 2012

Abstract : We develop an online actor–critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
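For readers unfamiliar with the structure of such methods, the sketch below illustrates how a Lagrangian actor–critic for an average-cost constrained MDP can be organized: a critic tracking a differential value function and the average Lagrangian cost, an actor descending the relaxed cost, and a projected ascent step on the Lagrange multiplier. Everything in it (the toy random MDP, feature choices, step sizes, and the constraint bound alpha) is an illustrative assumption, not the authors' algorithm or the parameters from the paper.

```python
# Illustrative sketch only: a Lagrangian actor-critic for an average-cost
# constrained MDP on a randomly generated toy problem. All names, step sizes,
# and the constraint bound are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3

# Toy MDP: random transition kernel, a primary cost c and a constraint cost d.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
c = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # objective cost
d = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # constraint cost
alpha = 0.4   # constraint bound: long-run average of d should stay below alpha

# Linear critic over one-hot state features; softmax actor over state-action features.
def phi(s):                        # critic feature vector
    x = np.zeros(n_states); x[s] = 1.0; return x

def psi(s, a):                     # actor feature vector
    x = np.zeros(n_states * n_actions); x[s * n_actions + a] = 1.0; return x

def policy(s):
    prefs = np.array([theta @ psi(s, a) for a in range(n_actions)])
    prefs -= prefs.max()
    p = np.exp(prefs)
    return p / p.sum()

w = np.zeros(n_states)                     # critic weights (differential value fn)
theta = np.zeros(n_states * n_actions)     # policy parameters
lam = 0.0                                  # Lagrange multiplier
rho = 0.0                                  # estimate of average Lagrangian cost

# Nested step sizes (assumption): critic fastest, actor slower, multiplier slowest.
a_w, a_theta, a_lam, a_rho = 0.05, 0.01, 0.002, 0.05

s = 0
for t in range(100_000):
    probs = policy(s)
    a = rng.choice(n_actions, p=probs)
    s_next = rng.choice(n_states, p=P[s, a])

    # Relaxed single-stage cost: objective plus multiplier-weighted constraint violation.
    l = c[s, a] + lam * (d[s, a] - alpha)

    # Average-cost TD error for the relaxed problem.
    delta = l - rho + w @ phi(s_next) - w @ phi(s)

    rho += a_rho * (l - rho)                # average-cost tracker
    w += a_w * delta * phi(s)               # critic (TD(0)) update
    grad_log_pi = psi(s, a) - sum(probs[b] * psi(s, b) for b in range(n_actions))
    theta -= a_theta * delta * grad_log_pi  # actor descends the Lagrangian cost
    lam = max(0.0, lam + a_lam * (d[s, a] - alpha))  # dual ascent, projected to >= 0

    s = s_next

print("estimated average Lagrangian cost:", rho, "lambda:", lam)
```

The projection of the multiplier onto the nonnegative half-line and the use of separated (nested) step sizes mirror the general structure of constrained actor–critic schemes; the specific timescale ordering and convergence analysis used in the paper are given in the article itself.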

Cite this Research Publication : S. Bhatnagar and K. Lakshmanan, “An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes”, Journal of Optimization Theory and Applications, vol. 153, no. 3, pp. 688–708, 2012.
