
What can you see? Identifying cues on internal states from the movements of natural social interactions
Bartlett, M., Edmunds, C., Belpaeme, T., Thill, S. and Lemaignan, S. (2019) What can you see? Identifying cues on internal states from the movements of natural social interactions. Frontiers in Robotics and AI, 6, 49. doi:10.3389/frobt.2019.00049. ISSN 2296-9144.
PDF (Published Version)
WRAP-what-can-you-see-identifying-cues-internal-states-movements-natural-social-interactions-Edmunds-2019.pdf - Available under License Creative Commons Attribution 4.0. Download (789Kb)

PDF (Accepted Version)
WRAP-what-can-you-see-identifying-cues-internal-states-movements-natural-social-interactions-Edmunds-2019.pdf - Embargoed item. Restricted access to Repository staff only. Download (221Kb)
Official URL: https://doi.org/10.3389/frobt.2019.00049
Abstract
In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying only 2D positional data. They were then asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full-scene clips were more informative than the 2D positional-data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found higher inter-rater agreement for full scenes than for positional data, the level of agreement in the latter case was still above chance, demonstrating that the social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence, and engagement regardless of video condition. The machine learning classifiers achieved similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of social situations using low-dimensional data (such as the movements and poses of observed individuals) as input.
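The abstract outlines a two-step analysis: first inter-rater agreement between participants, then machine learning classification of internal states from the ratings obtained. The sketch below illustrates the general shape of such a pipeline in Python. It is not the authors' code: the data, the agreement statistic (mean pairwise correlation as a stand-in for whatever measure the paper uses), and the classifier (a generic random forest) are all illustrative assumptions.

```python
# Minimal sketch of the two-step analysis outlined in the abstract.
# All data, names, and modelling choices below are illustrative
# assumptions, not the authors' actual materials or code.
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: ratings[r, c, q] is rater r's answer (7-point scale)
# to questionnaire item q for clip c; labels[c] is the social construct
# annotated for clip c (three balanced classes for the sketch).
n_raters, n_clips, n_items = 20, 40, 6
ratings = rng.integers(1, 8, size=(n_raters, n_clips, n_items)).astype(float)
labels = np.arange(n_clips) % 3

def mean_pairwise_agreement(r):
    """Stand-in agreement statistic: mean pairwise Pearson correlation
    between raters' flattened rating profiles."""
    flat = r.reshape(r.shape[0], -1)
    return np.mean([np.corrcoef(flat[i], flat[j])[0, 1]
                    for i, j in combinations(range(len(flat)), 2)])

# Step 1: how much do raters agree with one another in this condition?
print(f"inter-rater agreement: {mean_pairwise_agreement(ratings):.3f}")

# Step 2: predict each clip's construct from the average rating it
# received, scored with cross-validated accuracy.
X = ratings.mean(axis=0)  # one averaged rating vector per clip
scores = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5)
print(f"classifier accuracy: {scores.mean():.3f}")
```

Running such a pipeline once per condition (full scene vs. 2D positional data) and comparing the agreement scores and cross-validated accuracies mirrors the comparison the abstract reports.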
| Item Type: | Journal Article |
|---|---|
| Divisions: | Faculty of Social Sciences > Warwick Business School > Behavioural Science; Faculty of Social Sciences > Warwick Business School |
| Journal or Publication Title: | Frontiers in Robotics and AI |
| Publisher: | Frontiers Research Foundation |
| ISSN: | 2296-9144 |
| Official Date: | 26 June 2019 |
| Volume: | 6 |
| Article Number: | 49 |
| DOI: | 10.3389/frobt.2019.00049 |
| Status: | Peer Reviewed |
| Publication Status: | Published |
| Access rights to Published version: | Open Access (Creative Commons) |
| Date of first compliant deposit: | 21 June 2019 |
| Date of first compliant Open Access: | 26 June 2019 |