Social network extraction and analysis based on multimodal dyadic interaction

Publication date

2014-04-08T08:49:11Z

2012-02-07

Abstract

Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" one person has over the other. The states of the Influence Model encode audio/visual features automatically extracted from our videos using state-of-the-art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network.
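The abstract describes the social network as a weighted directed graph, characterized via centrality measures. The following is a minimal, self-contained sketch of that representation; the speaker names and edge weights are hypothetical illustrations, not the paper's measured influence values, and degree centrality stands in for whichever centrality measures the paper actually uses.

```python
# Minimal sketch (hypothetical data): an influence network as a weighted
# directed graph, plus a simple out-degree centrality computation.
edges = {
    ("A", "B"): 0.7,  # speaker A influences speaker B with weight 0.7
    ("B", "A"): 0.3,
    ("A", "C"): 0.6,
    ("C", "B"): 0.4,
}
nodes = {n for pair in edges for n in pair}

def out_degree_centrality(edges, nodes):
    """Fraction of the other nodes that each node has an outgoing link to."""
    counts = {n: 0 for n in nodes}
    for (src, _dst) in edges:
        counts[src] += 1
    denom = len(nodes) - 1
    return {n: c / denom for n, c in counts.items()}

centrality = out_degree_centrality(edges, nodes)
print(centrality["A"])  # -> 1.0 (A has outgoing links to both other speakers)
```

In this toy graph, speaker A links to both others and so gets the maximum out-degree centrality of 1.0; incoming influence could be measured the same way over edge destinations.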

Document Type

Article


Published version

Language

English

Publisher

MDPI Publishing

Related items

Reproduction of the document published at: 10.3390/s120201702

Sensors, 2012, vol. 12, num. 2, p. 1702-1719

http://dx.doi.org/10.3390/s120201702

Recommended citation

Escalera Guerrero, Sergio, et al. "Social network extraction and analysis based on multimodal dyadic interaction." Sensors, 2012, vol. 12, num. 2, p. 1702-1719. doi:10.3390/s120201702

Rights

cc-by (c) Escalera Guerrero, Sergio et al., 2012

http://creativecommons.org/licenses/by/3.0/es
