Publication Type:

Journal Article

Source:

Journal of Optimization Theory and Applications, Volume 153, Number 3, pp. 688–708 (2012)

URL:

http://dx.doi.org/10.1007/s10957-012-9989-5

Abstract:

We develop an online actor–critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
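
To make the Lagrangian relaxation concrete: the constrained problem min_θ J(θ) subject to G_i(θ) ≤ α_i is replaced by a saddle-point problem over L(θ, λ) = J(θ) + Σ_i λ_i (G_i(θ) − α_i), λ_i ≥ 0, where J and G_i are the long-run average objective and constraint costs under policy parameter θ. The sketch below is not the paper's algorithm; it is a generic multi-timescale Lagrangian actor–critic on a hypothetical two-state MDP (tabular softmax actor, average-cost TD critic, projected multiplier ascent), with the transition matrix, costs, step sizes, and threshold ALPHA all chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (hypothetical; stands in for the queueing network).
# P[s, a] gives next-state probabilities; c is the objective cost and
# g is the constraint cost whose long-run average must stay below ALPHA.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
c = np.array([[1.0, 0.2], [0.5, 2.0]])
g = np.array([[0.0, 1.0], [1.0, 0.0]])
ALPHA = 0.4          # constraint threshold (assumed)
LAMBDA_MAX = 100.0   # projection bound keeping the multiplier iterates bounded

n_states, n_actions = 2, 2
theta = np.zeros((n_states, n_actions))  # actor: tabular softmax parameters
v = np.zeros(n_states)                   # critic: differential value estimates
rho = 0.0                                # running average of the Lagrangian cost
g_avg = 0.0                              # running average of the constraint cost
lam = 0.0                                # Lagrange multiplier

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

s = 0
for t in range(1, 200_000):
    # Three decreasing step sizes on separated timescales:
    # critic fastest, actor slower, multiplier slowest.
    a_t = 1.0 / t**0.55      # critic
    b_t = 1.0 / t**0.8       # actor
    c_t = 1.0 / t            # multiplier

    pi = policy(s)
    a = rng.choice(n_actions, p=pi)
    s2 = rng.choice(n_states, p=P[s, a])

    # Lagrangian one-step cost and TD error for the average-cost critic.
    cost = c[s, a] + lam * g[s, a]
    delta = cost - rho + v[s2] - v[s]
    rho += a_t * (cost - rho)
    v[s] += a_t * delta

    # Actor: policy-gradient step with the TD error as the advantage signal
    # (descent, since we are minimizing cost).
    grad_log = -pi
    grad_log[a] += 1.0
    theta[s] -= b_t * delta * grad_log

    # Multiplier: projected ascent on the estimated constraint violation.
    g_avg += a_t * (g[s, a] - g_avg)
    lam = min(max(lam + c_t * (g_avg - ALPHA), 0.0), LAMBDA_MAX)

    s = s2

print(f"lambda={lam:.3f}, avg constraint cost={g_avg:.3f} (target <= {ALPHA})")
```

On the slowest timescale the multiplier rises while the running constraint average exceeds ALPHA and decays toward zero otherwise; this ascent–descent interplay is what drives the policy iterates toward a feasible point, consistent with the behavior reported in the abstract.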

Cite this Research Publication

S. Bhatnagar and K. Lakshmanan, “An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes”, Journal of Optimization Theory and Applications, vol. 153, no. 3, pp. 688–708, 2012.
