Topic | People | Description |
---|---|---|
MSC 49L20 - Dynamic programming method | Athena Picarelli | An optimal control problem is defined by a dynamics, a set of controls acting on that dynamics, and a cost (or gain) functional depending on the control and the associated trajectory. The objective is to minimize (or maximize) this functional. The value function, a function of the initial time and state, is the optimal value of the problem, i.e. the minimum (or maximum) of the cost (or gain). I study optimal control problems in which the dynamics is given by a stochastic differential equation. By the dynamic programming approach, the value function can be characterized as the solution, in the weak sense of viscosity solutions, of a partial differential equation called the Hamilton-Jacobi-Bellman equation. |
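A minimal sketch of the setup described above, for a finite-horizon minimization problem; the symbols $b$, $\sigma$, $\ell$, $g$ and the horizon $T$ are illustrative placeholders, not taken from the source:

```latex
% Controlled stochastic differential equation (dynamics), driven by a Brownian motion W:
%   dX_s = b(X_s, u_s)\,ds + \sigma(X_s, u_s)\,dW_s, \qquad X_t = x,
% where u is a control process with values in a control set U.
%
% Cost functional and value function (the optimal value over all admissible controls):
%   v(t, x) = \inf_{u} \; \mathbb{E}\Big[ \int_t^T \ell(X_s, u_s)\,ds + g(X_T) \Big].
%
% Dynamic programming characterizes v as the (viscosity) solution of the
% Hamilton-Jacobi-Bellman equation with terminal condition:
%   \partial_t v + \inf_{u \in U} \Big\{ b(x,u)\cdot D_x v
%       + \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(x,u)\, D_x^2 v\big)
%       + \ell(x,u) \Big\} = 0, \qquad v(T, x) = g(x).
```

For a maximization (gain) problem the $\inf$ over controls is replaced by a $\sup$; the second-order term reflects the diffusion part of the stochastic dynamics.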
© 2023 | Verona University