PhD thesis proposal: Computational Model of Affective and Non-Verbal Behaviors Generation for Virtual Patient
Start date: November 2018
Duration: 36 months
LIMSI-CNRS (Orsay) www.limsi.fr (RER B from Paris)
Jean-Claude MARTIN : email@example.com
Professor in Computer Science - Université Paris Sud – LIMSI
Applications: send to MARTIN@LIMSI.FR a CV, a letter of motivation, grades from the last 3 years, and reports on previous internships / projects.
Non-verbal signals displayed by medical staff have a strong impact on patients. Virtual Patients are virtual characters that can be used to train medical staff to interact with human patients. Current Virtual Patients are limited in several respects. They are not applied to dementia patients. The dynamics of the interaction, and the consideration of the non-verbal behaviors displayed by the trainee and the virtual character, are limited. Scenarios are either manually driven by an experimenter, or limited to the selection of canned animations, without any dynamic modeling of a dementia patient's pathological appraisal component, nor any probabilistic behavioral model rooted in field data. This impedes the credibility of the training.
The goal of this PhD thesis is to design a model of pathological emotional appraisal of the situation that can be used to simulate a patient's behaviors, and thus to train medical staff on the proper non-verbal behaviors they should display.
A dynamic component based on cognitive theories of emotions will be modeled from field data and interviews with medical experts. This component will rely on a small set of criteria used for appraising the current situation. The Virtual Patient will simulate the evaluation of these criteria by considering contextual information (e.g., systematic negative evaluation of unexpected events). The model will consider the moderating effects that the non-verbal behavior of the user can have on the patient's evaluations of the situation. The expectancy of the current situation will be evaluated, since unexpected events can induce violent behaviors in dementia patients. This simple cognitive and emotional component will enable more realistic reactions of the virtual patient and a more dynamic selection of non-verbal behaviors and the related animations. Such data is also needed to generate explanations for the formative feedback provided to the learner.
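To illustrate the idea of an appraisal component whose outcomes are moderated by the trainee's observed non-verbal behavior, here is a minimal sketch. All criteria names, scores, and the moderation rule are hypothetical assumptions for illustration only, not the model to be developed in the thesis.

```python
# Hypothetical sketch of a pathological appraisal component.
# Criteria names, numeric values, and the moderation rule are illustrative only.

def appraise(event, user_behavior):
    """Return appraisal scores in [-1, 1] for a small set of criteria."""
    scores = {
        # Dementia-specific bias (assumed): unexpected events are
        # systematically appraised negatively.
        "expectedness": -1.0 if event.get("unexpected") else 0.5,
        "pleasantness": event.get("pleasantness", 0.0),
    }
    # Moderating effect of the trainee's non-verbal behavior (assumed):
    # a calm, open posture softens every appraisal by a fixed amount.
    if user_behavior.get("calm_posture"):
        scores = {k: min(1.0, v + 0.3) for k, v in scores.items()}
    return scores

# Example event: the morning wash, announced unexpectedly.
morning_wash = {"unexpected": True, "pleasantness": -0.6}
print(appraise(morning_wash, {"calm_posture": True}))
```

The point of the sketch is only that the same event yields different appraisal scores depending on the trainee's non-verbal behavior, which is what makes the training interactive.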
The state of the art shows that modeling an interactive virtual patient is a very complex task. For this reason, the project will
1) focus on a limited set of relevant cognitive components, identified with the medical partners and inspired by the collected field data,
2) be framed on a focused set of relevant non-verbal behaviors (there will not be any speech recognition of the learner's utterances; the virtual patient will use recorded speech to ensure audio quality and a realistic setting),
and 3) not include any automatic generation of dialog, but rather a dynamic selection of paths in narrative trees (the selection of the path to follow in the branching narrative tree being based on a model of a limited set of identified cognitive components).
The sequential evaluation of these criteria will be designed so that their sequential combination determines the patient's reaction (possibly with the value of some criteria left unspecified in some situations). For example, the user tells the patient that the nurse is going to wash him for his morning toilette; this event will generally be evaluated as unpleasant by Alzheimer patients.
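The selection of a path in a narrative tree from appraisal criteria could be sketched as follows. All node names, criteria, and thresholds below are hypothetical assumptions, not the project's actual design.

```python
# Hypothetical sketch: selecting the next branch of a narrative tree from
# appraisal criteria. Node names, criteria, and thresholds are illustrative.

NARRATIVE_TREE = {
    "morning_wash_announced": [
        # (condition on the appraisal scores, next node) -- first match wins.
        (lambda a: a["pleasantness"] < -0.5 and a["expectedness"] < 0.0,
         "patient_reacts_violently"),
        (lambda a: a["pleasantness"] < 0.0,
         "patient_protests_verbally"),
        (lambda a: True,  # default branch
         "patient_complies"),
    ],
}

def next_node(current, appraisal):
    """Follow the first branch whose condition matches the appraisal."""
    for condition, child in NARRATIVE_TREE[current]:
        if condition(appraisal):
            return child
    return current  # no branch matched: stay in the current node

# The morning-wash example: unexpected and unpleasant -> violent reaction.
print(next_node("morning_wash_announced",
                {"pleasantness": -0.6, "expectedness": -1.0}))
# prints: patient_reacts_violently

# A calmer appraisal (e.g., after soothing trainee behavior) selects a
# milder branch.
print(next_node("morning_wash_announced",
                {"pleasantness": -0.3, "expectedness": 0.2}))
# prints: patient_protests_verbally
```

Because the branch chosen at each node records which criteria fired, the same structure could also supply the explanations needed for the formative feedback mentioned above.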
State of the art (non-verbal behaviors, virtual patients, computational psychiatry, …)
Specifications of the virtual patient
Computational model of psychological symptoms of the patient with dementia
Computational model of multimodal behavioral symptoms
Development of computational models of behavioral and clinical features of patients with dementia, using real-life cases and field observations. This task aims at designing interactive, expressive virtual agents thanks to field studies and collected multimodal corpora. The PhD student will adapt and extend the team's previous work on multimodal annotation schemes for behaviors, emotions and emotional events in narratives (the MARC platform (http://www.marc-toolkit.net/) for animating virtual agents with a cognitive model of affect) to the specific needs of Virtual Patients.
Adaptation of the Component Process Model's cognitive model of emotion to dementia.
Specification and animation of subtle facial expressions for expressing dementia-related affect and cognitive states.
Generated animations of the patients' multimodal affective behaviors will be assessed by specialists in terms of credibility.
Integration with partners' deliverables
The thesis is funded by the project ANR VIRTUALZ.
The salary is approximately 1768 euros per month (gross salary) before taxes.
Master in Computer Science or Master in Cognitive Science
Interests in Artificial Intelligence / Human-Computer Interaction / Psychology / Character Animation
Basic knowledge of Unity and programming (C# / Java) is a plus