Collecting multimodal biometric data for insights and affective computing: a university lecture
The evergreen question in affective computing is: how do we build a robust, accurate and actionable emotional AI model?
To top it off, let's make sure the data collection is scalable, affordable and easy for the end respondent.
Surprisingly, the answer already exists, but as the saying goes: the future is already here, it's just not evenly distributed.
The short answer: by combining multimodal physical and physiological data with self-report questions - but there is much more to the model than meets the eye.
The full answer is a 45-minute lecture given to the students taking the “Introduction to Affective Computing in eBusiness” class at the Faculty of Business and Economics - an institution that holds three international accreditations: EQUIS and AACSB Business Accreditation at the institution level, and EFMD Programme Accreditation.
Multimodal: physical, physiological and self-assessment for the win
To summarise the lecture: the 3 main ways in which we can collect data for affective computing are:
• Physical (e.g., speech, gestures, posture, ...)
• Physiological (e.g., skin temperature, galvanic skin response, ...)
• Questionnaires (e.g., PANAS, ESM, ...)
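To make the combination concrete, here is a minimal sketch of how one time-stamped observation could hold all three channels side by side. The field names and scales are illustrative assumptions, not a fixed schema from the lecture or from our platform.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalObservation:
    """One time-stamped observation combining the three data channels."""
    timestamp_s: float                         # seconds since session start
    # Physical channel (derived from audio/video)
    facial_valence: Optional[float] = None     # -1.0 (negative) .. 1.0 (positive)
    speech_arousal: Optional[float] = None     # 0.0 (calm) .. 1.0 (excited)
    # Physiological channel (from wearable biosensors)
    heart_rate_bpm: Optional[float] = None
    eda_microsiemens: Optional[float] = None
    skin_temp_c: Optional[float] = None
    # Self-report channel (questionnaire answers)
    sam_pleasure: Optional[int] = None         # 1..9 SAM scale
    sam_arousal: Optional[int] = None          # 1..9 SAM scale

# Not every channel has to be present in every record:
obs = MultimodalObservation(timestamp_s=12.5, facial_valence=0.4,
                            heart_rate_bpm=78.0, sam_pleasure=7)
print(obs)
```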
What are the most common tools, vendors and technologies for each of the 3 methods?
For physical data, the most common sources are anything that can be collected via audio and video files, which is also what we collect via our product called Lite:
• Facial coding, e.g., Affectiva
• Speech sentiment analysis, e.g., Vokaturi
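As a rough illustration of the "only a camera and software" point, here is a small sketch that samples frames from a recorded session video. The video file name is a placeholder, and `classify_frame` is a hypothetical stub where a real facial-coding model (such as a vendor SDK, whose actual API will differ) would plug in.

```python
import cv2  # pip install opencv-python

def sample_frames(video_path, every_n_seconds=1.0):
    """Yield (timestamp_in_seconds, frame) pairs at a fixed sampling interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield index / fps, frame
        index += 1
    cap.release()

def classify_frame(frame):
    # Hypothetical placeholder: a real pipeline would run a facial-coding model
    # here and return per-emotion scores for the face(s) in the frame.
    return {"happiness": 0.0, "surprise": 0.0}

for t, frame in sample_frames("respondent_session.mp4"):
    print(f"{t:.1f}s -> {classify_frame(frame)}")
```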
Physiological data is anything that requires contact with a biosensor, most commonly a sleek wearable containing a MEMS sensor, which is also what we collect via our product called Pro:
• Neurofeedback measurement with EEG, e.g., EPOC
• Heart rate sensors, e.g., Apple Watch, Polar H9
• Sweat sensors (EDA/GSR), e.g., BioNomadix or Shimmer
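For the physiological side, raw exports typically need some processing before they become usable features. Below is a simplified sketch, assuming you already have inter-beat intervals and an EDA trace exported from the wearable, that computes a mean heart rate and a very rough tonic/phasic EDA split; real analysis pipelines are considerably more careful than this.

```python
import numpy as np

def heart_rate_bpm(ibi_seconds):
    """Mean heart rate (beats per minute) from inter-beat intervals in seconds."""
    return 60.0 / float(np.mean(ibi_seconds))

def eda_tonic_phasic(eda, fs, window_s=4.0):
    """Very rough tonic/phasic split: tonic = moving average, phasic = residual."""
    win = max(int(fs * window_s), 1)
    kernel = np.ones(win) / win
    tonic = np.convolve(eda, kernel, mode="same")
    phasic = eda - tonic
    return tonic, phasic

# Example with made-up numbers:
ibi = np.array([0.82, 0.80, 0.85, 0.79])        # seconds between heartbeats
eda = np.sin(np.linspace(0, 3, 120)) + 5.0       # fake EDA trace sampled at 4 Hz
print(round(heart_rate_bpm(ibi), 1), "bpm")
tonic, phasic = eda_tonic_phasic(eda, fs=4.0)
```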
Questionnaires are self-explanatory, but specialised ones (available in the Tacit platform) are needed for affective computing and insights experiments, such as:
• ESM (experience sampling method) questionnaires → assess sampled experiences and behaviours as well as moment-to-moment changes in mental states, embedded in daily life
• SAM (Self-Assessment Manikin) questionnaires → a short self-report scale designed to record a participant's pleasure, arousal, and dominance, using pictures to convey the scale
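To show how lightweight the self-report channel can be, here is a minimal sketch of how a single SAM answer could be stored and validated. The field names and the 1..9 scale follow the classic SAM design, but the exact schema in any given platform (including ours) may differ.

```python
from dataclasses import dataclass

@dataclass
class SamResponse:
    participant_id: str
    stimulus_id: str
    pleasure: int    # 1 = very unpleasant .. 9 = very pleasant
    arousal: int     # 1 = very calm .. 9 = very excited
    dominance: int   # 1 = controlled .. 9 = in control

    def __post_init__(self):
        # Reject answers that fall outside the 9-point pictorial scale.
        for name in ("pleasure", "arousal", "dominance"):
            value = getattr(self, name)
            if not 1 <= value <= 9:
                raise ValueError(f"{name} must be on the 1..9 SAM scale, got {value}")

# Example answer for one participant and one stimulus (IDs are made up):
response = SamResponse("P017", "ad_variant_B", pleasure=7, arousal=5, dominance=6)
print(response)
```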
In case you want to know more about vendors, make sure you check out our blog post containing a database of relevant providers in the space.
What are the main advantages and disadvantages for each method?
Physical - Advantages:
• Widely applicable, as mostly no special hardware is needed (only specific software and a computer with a camera) → scalability of sample size!
Physical - Disadvantages:
• It is challenging to capture and interpret motion data accurately and correctly. E.g., everyone's smile is different: for some people a very slight smile might signal a more intense happy emotion than a big smile does for others. Or crying: sometimes people cry because they are sad and sometimes because they are overwhelmed and happy → hard to interpret
• Gender and ethnicity bias → software often works best on white adult males and worse on, e.g., people of colour or women
• People can always fake emotions and physically express something other than what they are actually feeling
Physiological - Advantages:
• Very accurate, as the feedback is mostly involuntary and people cannot easily control their bodies' responses
Physiological - Disadvantages:
• Costly to measure, as hardware is (almost always) necessary
• Therefore, it is also not easy to scale to a large sample size
• Developing accurate wearables for biometric measurement is challenging
Questionnaires - Advantages:
• Easy and cheap to use and to scale to a large sample size
• Great freedom and many possibilities in content and design
Questionnaires - Disadvantages:
• People may not want to tell the truth / may lie on purpose
• People may have distorted self-perceptions
Enjoy the lecture, and please note you can find the slides at this link.