pub:research, last modified 2024/01/25 12:26 by kkutt (previous revision 2021/11/16 17:32, MRC 2021 papers)
===== Papers =====

=== DSAA2023 ===
  * K. Kutt, Ł. Ściga, and G. J. Nalepa, "
  * DOI: [[https://
  * ++Abstract | Current review papers in the area of Affective Computing and Affective Gaming point to a number of issues with using their methods in out-of-the-lab scenarios, making them virtually impossible to be deployed. On the contrary, we present a game that serves as a proof-of-concept designed to demonstrate that—being aware of all the limitations and addressing them accordingly—it is possible to create a product that works in-the-wild. A key contribution is the development of a dynamic game adaptation algorithm based on the real-time analysis of emotions from facial expressions. The obtained results are promising, indicating the success in delivering a good game experience.++
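The DSAA2023 abstract above mentions a dynamic game adaptation algorithm driven by real-time emotion recognition from facial expressions. As a rough illustration only (the paper's actual algorithm is not reproduced here; the emotion scores, thresholds, and step size below are invented assumptions), such an adaptation loop might look like:

```python
# Illustrative sketch of emotion-driven dynamic difficulty adjustment (DDA).
# All names, thresholds, and the step size are invented for illustration;
# they are not taken from the paper.

def adjust_difficulty(difficulty: float, frustration: float, boredom: float,
                      step: float = 0.1) -> float:
    """Nudge difficulty down when the player appears frustrated and up
    when they appear bored; emotion scores are assumed to lie in [0, 1]."""
    if frustration > 0.6:    # player struggling -> ease off
        difficulty -= step
    elif boredom > 0.6:      # player disengaged -> raise the challenge
        difficulty += step
    return min(1.0, max(0.0, difficulty))  # keep difficulty in [0, 1]

# A frustrated player gets an easier game:
print(round(adjust_difficulty(0.5, frustration=0.8, boredom=0.1), 2))
```

In a real in-the-wild deployment the emotion scores would come from a facial-expression recognition model running on the player's camera feed, with the smoothing and robustness issues the abstract alludes to handled before any adaptation step.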

=== InfFusion2023 ===
  * J. M. Górriz //et al.//, "
  * DOI: [[https://
  * [[https://
  * ++Abstract | Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, ...++

=== SciData2022 ===
  * K. Kutt, D. Drążyk, L. Żuchowska, M. Szelążek, S. Bobek, and G. J. Nalepa, "
  * DOI: [[https://
  * [[https://
  * ++Abstract | Generic emotion prediction models based on physiological data developed in the field of affective computing apparently are not robust enough. To improve their effectiveness, ...++

=== AfCAI2022 ===
  * K. Kutt, P. Sobczyk, and G. J. Nalepa, "
  * {{ :
  * ++Abstract | Facial expressions convey the vast majority of the emotional information contained in social utterances. From the point of view of affective intelligent systems, it is therefore important to develop appropriate emotion recognition models based on facial images. As a result of the high interest of the research and industrial community in this problem, many ready-to-use tools are being developed, which can be used via suitable web APIs. In this paper, two of the most popular APIs were tested: Microsoft Face API and Kairos Emotion Analysis API. The evaluation was performed on images representing 8 emotions—anger, ...++
=== MRC2021b ===
  * ++Abstract | We are aiming at developing a technology to detect, identify and interpret human emotional states. We believe that it can be provided based on the integration of context-aware systems and affective computing paradigms. We are planning to identify and characterize affective context data, and provide knowledge-based models to identify and interpret affects based on this data. A working name for this technology is simply AfCAI: Affective Computing with Context Awareness for Ambient Intelligence.++
===== Projects =====

  * **Personality,
===== Tools and Datasets =====