Modeling Heart and Brain signals in the context of Wellbeing and Autism Applications

A Deep Learning Approach

16 January 2020

Time: 16:00
Venue: Polo Ferrari 1 - Via Sommarive 5, room Ofek 

PhD Candidate

  • Juan Manuel Mayor Torres

Abstract of Dissertation

The analysis and understanding of physiological and brain signals is critical for decoding users' behavioral and neural outcome measures across application domains. Personal Health-care agents have recently been proposed to monitor daily activities and acquire reliable data, with the aim of enhancing the wellbeing of control participants and the quality of life of non-neurotypical participants in lab-controlled clinical studies.

New wearable devices with larger yet more compact memory, together with the possibility of hosting large datasets in cloud- and network-based applications, streamline the implementation of improved computational health-care agents. These enhanced agents can provide services such as real-time health care, medical monitoring, and alarms based on multiple biological outcome measures to support physicians' diagnoses.

In this dissertation we focus on multiple Signal Processing (SP), Machine Learning (ML), and Saliency Relevance Map (SRM) techniques and classifiers, with the aim of enhancing Personal Health-care agents in a multimodal clinical environment. We therefore evaluate current state-of-the-art methods for hypertension detection and for decoding object-category and emotion stimuli from biosignals.

To evaluate the performance of the ML, SP, and SRM techniques proposed in this study, we divide this thesis into two main implementations:

1) Four initial pipelines in which we evaluate the SP and ML methodologies for: a) hypertension detection based on Blood-Volume-Pulse (BVP) signals from Photoplethysmography (PPG) wearable sensors; b) Heart-Rate (HR) and Inter-Beat-Interval (IBI) prediction using lightweight adaptive filtering for physical-exercise and real-world environments; c) object-category stimulus decoding using EEG features and feature-subspace transformations; and d) emotion recognition using EEG features from well-known datasets.

2) A complete performance and robustness SRM evaluation of a neural emotion decoding/recognition pipeline using EEG features from Autism Spectrum Disorder (ASD) groups. This pipeline is presented as a novel assistive system for lab-controlled Face Emotion Recognition (FER) intervention in ASD subjects, and includes a Deep ConvNet classifier to extract the relevant neural information and decode emotions successfully.

(ICT International Doctoral School)
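As a concrete illustration of the adaptive-filtering idea behind HR/IBI prediction in pipeline b), the core artifact-cancellation step can be sketched as a least-mean-squares (LMS) filter: a reference signal correlated with the motion artifact (e.g. from an accelerometer) is adaptively filtered and subtracted from the noisy pulse signal. This is a minimal sketch, not the filter used in the dissertation; the sampling rate, frequencies, tap count, and step size below are synthetic assumptions chosen only for the example.

```python
import numpy as np

def lms_cancel(noisy, reference, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller.

    noisy     : signal of interest corrupted by an artifact (e.g. PPG + motion)
    reference : signal correlated with the artifact (e.g. accelerometer axis)
    Returns the error signal e = noisy - y, i.e. the cleaned estimate.
    """
    w = np.zeros(n_taps)
    cleaned = np.zeros(len(noisy))
    for n in range(n_taps, len(noisy)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples first
        y = w @ x                          # current estimate of the artifact
        cleaned[n] = noisy[n] - y          # subtract it from the noisy signal
        w += mu * cleaned[n] * x           # LMS weight update
    return cleaned

# Synthetic example: a 2 Hz "pulse" (120 bpm) corrupted by a 0.5 Hz motion artifact.
fs = 50.0                                  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 2.0 * t)
artifact = 1.5 * np.sin(2 * np.pi * 0.5 * t)
cleaned = lms_cancel(pulse + artifact, artifact)
```

After the filter converges, the cleaned signal tracks the pulse closely, so beat peaks (and hence HR and IBI) can be estimated from it; the same structure applies with a real accelerometer reference instead of the exact artifact.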
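The gradient-based intuition behind the SRM analysis in implementation 2) can be sketched with a toy model: the gradient of a class logit with respect to the input yields a per-channel, per-sample relevance map over the EEG trial. The network below is a small random-weight MLP standing in for the trained Deep ConvNet, and all dimensions (channels, samples, hidden units, classes) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG "trial": 4 channels x 32 time samples, flattened to one vector.
C, T, H, K = 4, 32, 16, 3              # channels, samples, hidden units, classes
x = rng.standard_normal(C * T)

# Random weights stand in for a trained network.
W1 = rng.standard_normal((C * T, H)) * 0.1
b1 = np.zeros(H)
W2 = rng.standard_normal((H, K)) * 0.1
b2 = np.zeros(K)

def forward(x):
    pre = x @ W1 + b1                  # hidden pre-activations
    h = np.maximum(pre, 0.0)           # ReLU
    logits = h @ W2 + b2
    return logits, pre

def saliency(x, k):
    """Gradient of logit k w.r.t. the input: a simple saliency map."""
    _, pre = forward(x)
    relu_grad = (pre > 0).astype(float)
    return W1 @ (relu_grad * W2[:, k])  # chain rule through the hidden layer

logits, _ = forward(x)
k = int(np.argmax(logits))             # predicted class
s = saliency(x, k).reshape(C, T)       # per-channel, per-sample relevance
```

Reshaping the gradient back to (channels, samples) shows which electrodes and time points drive the predicted emotion class; for a real ConvNet the same quantity is obtained by automatic differentiation rather than the hand-written chain rule used here.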