Multimodal Affect Recognition and Classification

An exciting and novel area of artificial intelligence is the development of affective computing systems, which aim to give machines the ability to recognise, interpret, and simulate human emotion, creating artificial emotional intelligence.

In terms of applications, affective computing paints a tantalising picture: accurate methods for emotion recognition and simulation could revolutionise a wide variety of fields. Research in this area is fuelled by advances in the psychological and physiological study of emotion, in parallel with emerging bio-sensing technology. The proliferation of ubiquitous computing platforms and Internet-of-Things sensors means that novel multi-sensor data fusion techniques are required to interpret and make the most of this data. The machine learning and computer vision communities have turned their attention to affective computing and are applying the current state of the art: deep learning.

Is it possible to create accurate yet lightweight deep learning architectures that can process multi-source data on devices with restricted storage and power, such as mobile phones?

The goal of this project is to design, implement, test, and deploy an accurate and lightweight system for emotion recognition, using a small dataset collected from built-in mobile-phone sensors.
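To make the idea of multi-sensor fusion concrete, here is a minimal late-fusion sketch: per-modality emotion probabilities (e.g. from an audio model and a motion-sensor model) are combined by weighted averaging before picking a class. All names, modalities, weights, and the emotion set are illustrative assumptions, not the project's actual method or sensor list.

```python
# Illustrative late-fusion baseline for multimodal emotion recognition.
# The emotion labels, modalities, and weights below are hypothetical.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def late_fusion(modality_probs, weights):
    """Weighted average of per-modality class-probability vectors.

    modality_probs: dict mapping modality name -> list of class probabilities
    weights:        dict mapping modality name -> fusion weight (need not sum to 1)
    """
    total = sum(weights[m] for m in modality_probs)
    fused = [0.0] * len(EMOTIONS)
    for m, probs in modality_probs.items():
        w = weights[m] / total  # normalise so fused probabilities sum to 1
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

def classify(fused):
    """Return the emotion label with the highest fused probability."""
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: the audio model is trusted more than the accelerometer model.
probs = {
    "audio":         [0.70, 0.10, 0.10, 0.10],
    "accelerometer": [0.30, 0.40, 0.10, 0.20],
}
weights = {"audio": 0.7, "accelerometer": 0.3}
fused = late_fusion(probs, weights)
print(classify(fused))  # -> happy
```

Late fusion of this kind is attractive on resource-constrained devices because each modality's model can stay small and run independently, with only a cheap combination step at the end.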

This project is funded by Huawei Technologies Co., Ltd. and runs from 1/11/2019 to 31/10/2020.