About this Event
3620 South Vermont Avenue, Los Angeles, CA 90089
Yumin Zhang, Auburn University
Join Zoom Meeting:
Meeting ID: 949 7361 9069
Abstract: We study the convergence of policy iteration (PI) for (deterministic) optimal control problems. To overcome the ill-posedness caused by a lack of regularity, we consider both discrete and semi-discrete schemes, adding a viscosity term via finite differences in space. We prove that PI for these schemes converges exponentially fast, and we provide a bound on the error induced by the schemes. If time permits, I will also discuss the convergence of exploratory Hamilton-Jacobi-Bellman (HJB) equations arising from the entropy-regularized exploratory control problem. These are joint works with Wenpin Tang, Hung Tran, and Xunyu Zhou.
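For readers unfamiliar with policy iteration, the following is a minimal sketch of the classical alternation between policy evaluation and greedy improvement, on a hypothetical discounted problem over a finite grid. The toy problem, its cost, and all names here are illustrative assumptions, not the discrete or semi-discrete viscosity schemes analyzed in the talk.

```python
import numpy as np

# Hypothetical toy problem (not from the talk): a deterministic control
# problem on N grid points of [0, 1]. Each control moves one grid step
# left or right; the running cost penalizes distance from a target point.
N = 51
gamma = 0.95                        # discount factor (assumed)
xs = np.linspace(0.0, 1.0, N)
target = 0.3
cost = (xs - target) ** 2           # running cost at each grid point
actions = (-1, 1)

def step(i, a):
    """Deterministic dynamics: move one step left/right, clipped to the grid."""
    return min(max(i + a, 0), N - 1)

def evaluate(policy):
    """Policy evaluation: solve the linear system V = cost + gamma * V[next]."""
    A = np.eye(N)
    for i in range(N):
        A[i, step(i, policy[i])] -= gamma
    return np.linalg.solve(A, cost)

def improve(V):
    """Policy improvement: greedily minimize cost + gamma * V[next state]."""
    return np.array([min(actions, key=lambda a: cost[i] + gamma * V[step(i, a)])
                     for i in range(N)])

policy = np.full(N, 1)              # initial guess: always move right
for _ in range(50):
    V = evaluate(policy)
    new_policy = improve(V)
    if np.array_equal(new_policy, policy):
        break                       # PI reached a fixed point
    policy = new_policy

# The converged policy steers every grid state toward the target point.
```

In this finite-state setting each iteration solves an exact linear system, so PI terminates at a fixed point; the talk concerns the harder continuous-state setting, where regularizing the scheme with an artificial viscosity term is what makes a convergence rate and error bound available.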