pub:research

pub:research [2022/04/29 12:41] kkutt
pub:research [2022/06/07 11:38] kkutt
  
===== Papers =====

=== SciData2022 ===
  * K. Kutt, D. Drążyk, L. Żuchowska, M. Szelążek, S. Bobek, and G. J. Nalepa, "**BIRAFFE2, a multimodal dataset for emotion-based personalization in rich affective game environments**," Sci. Data, vol. 9, no. 1, p. 274, 2022
  * DOI: [[https://doi.org/10.1038/s41597-022-01402-6|10.1038/s41597-022-01402-6]]
  * [[https://doi.org/10.1038/s41597-022-01402-6|Full text available online]]
  * ++Abstract | Generic emotion prediction models based on physiological data developed in the field of affective computing apparently are not robust enough. To improve their effectiveness, one needs to personalize them to specific individuals and incorporate broader contextual information. To address the lack of relevant datasets, we propose the 2nd Study in Bio-Reactions and Faces for Emotion-based Personalization for AI Systems (BIRAFFE2) dataset. In addition to the classical procedure in the stimulus-appraisal paradigm, it also contains data from an affective gaming session in which a range of contextual data was collected from the game environment. This is complemented by accelerometer, ECG and EDA signals, participants’ facial expression data, together with personality and game engagement questionnaires. The dataset was collected on 102 participants. Its potential usefulness is presented by validating the correctness of the contextual data and indicating the relationships between personality and participants’ emotions and between personality and physiological signals.++

=== AfCAI2022 ===
  * K. Kutt, P. Sobczyk, and G. J. Nalepa, "**Evaluation of Selected APIs for Emotion Recognition from Facial Expressions**," in Bio-inspired Systems and Applications: from Robotics to Ambient Intelligence. IWINAC 2022 Proceedings, Part II, 2022, pp. 65–74.
  * {{ :pub:kkt2022afcai.pdf |Full text draft available}}
  * ++Abstract | Facial expressions convey the vast majority of the emotional information contained in social utterances. From the point of view of affective intelligent systems, it is therefore important to develop appropriate emotion recognition models based on facial images. As a result of the high interest of the research and industrial community in this problem, many ready-to-use tools are being developed, which can be used via suitable web APIs. In this paper, two of the most popular APIs were tested: Microsoft Face API and Kairos Emotion Analysis API. The evaluation was performed on images representing 8 emotions—anger, contempt, disgust, fear, joy, sadness, surprise and neutral—distributed in 4 benchmark datasets: Cohn-Kanade (CK), Extended Cohn-Kanade (CK+), Amsterdam Dynamic Facial Expression Set (ADFES) and Radboud Faces Database (RaFD). The results indicated a significant advantage of the Microsoft API in the accuracy of emotion recognition both in photos taken en face and at a 45° angle. Microsoft’s API also has an advantage in the larger number of recognised emotions: contempt and neutral are also included.++
  
=== MRC2021b ===
  • pub/research.txt
  • Last modified: 2024/01/25 12:26
  • by kkutt