Stochastic Control Problems with Probability and Risk-sensitive Criteria

dc.contributor.author: Bhabak, Arnab
dc.date.accessioned: 2023-07-07T05:59:28Z
dc.date.available: 2023-07-07T05:59:28Z
dc.date.issued: 2023
dc.description: Supervisor: Saha, Subhamay
dc.description.abstract: In this thesis we consider stochastic control problems with probability and risk-sensitive criteria. We consider both single- and multi-controller problems. Under the probability criterion we first consider a zero-sum game with a semi-Markov state process. We consider a general state space and finite action spaces. Under suitable assumptions, we establish the existence of the value of the game and also characterize it through an optimality equation. In the process we also prescribe a saddle point equilibrium. Next we consider a zero-sum game with probability criterion for continuous-time Markov chains. We consider a denumerable state space and unbounded transition rates. Again under suitable assumptions, we show the existence of the value of the game and also characterize it as the unique solution of a pair of Shapley equations. We also establish the existence of a randomized stationary saddle point equilibrium. In the risk-sensitive setup we consider a single-controller problem with a semi-Markov state process. The state space is assumed to be discrete. In place of the classical risk-sensitive utility function, which is the exponential function, we consider general utility functions. The optimization criterion also contains a discount factor. We investigate random finite horizon and infinite horizon problems. Using a state augmentation technique we characterize the value functions and also prescribe optimal controls. We then consider risk-sensitive game problems. We study zero- and non-zero-sum risk-sensitive average criterion games for semi-Markov processes with a finite state space. For the zero-sum case, under suitable assumptions we show that the game has a value. We also establish the existence of a stationary saddle point equilibrium. For the non-zero-sum case, under suitable assumptions we establish the existence of a stationary Nash equilibrium. Finally, we also consider a partially observable model.
More specifically, we investigate partially observable zero-sum games where the state process is a discrete-time Markov chain. We consider a general utility function in the optimization criterion. We show the existence of value for both finite and infinite horizon games and also establish the existence of optimal policies. The main step involves converting the partially observable game into a completely observable game which also keeps track of the total discounted accumulated reward/cost.
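To give a concrete feel for the Shapley equations mentioned in the abstract, here is a minimal numerical sketch: value iteration for a discounted zero-sum stochastic game with a finite state space and two actions per player. This is an illustrative toy model, not the thesis's semi-Markov or continuous-time setting, and the function names (`matrix_game_value`, `shapley_value_iteration`) are hypothetical.

```python
import numpy as np

def matrix_game_value(A):
    """Value of a 2x2 zero-sum matrix game (row player maximizes).
    Uses the pure saddle point if one exists, otherwise the standard
    closed-form mixed-strategy value."""
    A = np.asarray(A, dtype=float)
    maximin = A.min(axis=1).max()   # row player's guaranteed payoff
    minimax = A.max(axis=0).min()   # column player's guaranteed loss
    if maximin == minimax:          # pure saddle point
        return maximin
    a, b = A[0]
    c, d = A[1]
    # closed form for a 2x2 game with no pure saddle point
    return (a * d - b * c) / (a - b - c + d)

def shapley_value_iteration(P, R, beta=0.9, tol=1e-10, max_iter=10_000):
    """Iterate the Shapley equation
        v(s) = val[ R(s,.,.) + beta * sum_{s'} P(s'|s,.,.) v(s') ]
    to its fixed point.  P has shape (S, 2, 2, S), R has shape (S, 2, 2),
    and beta in (0, 1) is the discount factor."""
    S = R.shape[0]
    v = np.zeros(S)
    for _ in range(max_iter):
        # stage game at each state: immediate reward plus discounted
        # expected continuation value, solved as a 2x2 matrix game
        v_new = np.array(
            [matrix_game_value(R[s] + beta * P[s] @ v) for s in range(S)]
        )
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Usage: a single absorbing state with stage payoffs [[2,3],[0,1]] and
# beta = 0.5; the fixed point solves v = 2 + 0.5 v, i.e. v = 4.
P = np.ones((1, 2, 2, 1))
R = np.array([[[2.0, 3.0], [0.0, 1.0]]])
print(shapley_value_iteration(P, R, beta=0.5))
```

Because the Shapley operator is a beta-contraction in the sup norm, the iteration converges geometrically from any starting guess; the thesis's contribution lies in establishing analogous fixed-point characterizations under far weaker assumptions (denumerable states, unbounded rates, semi-Markov dynamics).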
dc.identifier.other: ROLL NO. 186123004
dc.identifier.uri: https://gyan.iitg.ac.in/handle/123456789/2416
dc.language.iso: en
dc.relation.ispartofseries: TH-3116
dc.subject: Semi-Markov Processes
dc.subject: Probability Criterion
dc.subject: Risk-sensitive
dc.subject: Stochastic Games
dc.title: Stochastic Control Problems with Probability and Risk-sensitive Criteria
dc.type: Thesis
Files

Original bundle (2 files):
- Abstract-TH-3116_186123004.pdf, 78.83 KB, Adobe Portable Document Format (ABSTRACT)
- TH-3116_186123004.pdf, 993.11 KB, Adobe Portable Document Format (THESIS)

License bundle (1 file):
- license.txt, 1.71 KB, Plain Text