Watching you watching films

Other posts on this blog have listed links to papers addressing the indexing of video by analysis of the stylistic components of the video itself (shot lengths, colour, sound energy, etc). An alternative approach is not to look at the video but to look at the viewer watching the video or to look at the viewer’s brain whilst they watch. The papers presented adopt a range of approaches to understanding films by understanding viewers and to understanding viewers by understanding films. They are an example of the very interesting empirical research taking place across diverse subject areas that has yet to make any impact on film studies.

The linked-to versions may not be the final published versions.

Calcanis C, Callaghan V, Gardner M, and Walker M 2008 Towards end-user physiological profiling for video recommendation engines, 4th International Conference on Intelligent Environments, 21-22 July 2008, Seattle, USA.

This paper describes research aimed at creating intelligent video recommendation engines for broadband media services in digital homes. The aim of our research is to harness physiological signals to characterise people’s video selection preferences, which we plan to integrate into new generations of video recommendation engines. We describe an initial experiment aimed at determining whether videos produce useable variations in physiology and linking these with emotional changes elicited by video material. We discuss our results and consider the possibility of utilising physiological sensing methods to build profiles that can be treated as signatures. Finally, we conclude by describing the future directions of our work.

Canini L, Gilroy S, Cavazza M, Leonardi R, and Benini S 2010 Users’ response to affective film content: a narrative perspective, 8th International Workshop on Content-based Multimedia Indexing, 23-25 June, 2010, Grenoble, France.

In this paper, we take a human-centred view to the definition of the affective content of films. We investigate the relationship between users’ physiological response and multimedia features extracted from the movies, from the perspective of narrative evolution rather than by measuring average values. We found a certain dynamic correlation between arousal, derived from measures of Galvanic Skin Resistance during film viewing, and specific multimedia features in both sound and video domains. Dynamic physiological measurements were also consistent with post-experiment self-assessment by the subjects. These findings suggest that narrative aspects (including staging) are central to the understanding of video affective content, and that direct mapping of video features to emotional models taken from psychology may not capture these phenomena in a straightforward manner.
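The “dynamic correlation” the authors describe (correlation tracked over the course of the narrative rather than averaged over the whole film) can be illustrated with a sliding-window correlation between an arousal curve and a feature track. The sketch below is a minimal illustration, not the authors’ method: the 1 Hz sampling, the window length, and the synthetic signals are all assumptions.

```python
import numpy as np

def sliding_correlation(arousal, feature, win=30):
    """Pearson correlation between two per-second signals over a sliding window.

    arousal: GSR-derived arousal curve, one sample per second.
    feature: a low-level multimedia feature (e.g. sound energy) on the same timeline.
    """
    r = np.full(len(arousal), np.nan)
    for t in range(len(arousal) - win + 1):
        a = arousal[t:t + win]
        f = feature[t:t + win]
        if a.std() > 0 and f.std() > 0:               # skip flat windows
            r[t + win // 2] = np.corrcoef(a, f)[0, 1]  # centre the window's score
    return r

# Hypothetical example: 10 minutes of film, 1 Hz signals.
rng = np.random.default_rng(0)
arousal = rng.standard_normal(600).cumsum()       # stands in for smoothed GSR
sound_energy = rng.standard_normal(600).cumsum()  # stands in for a real feature track
print(np.nanmax(sliding_correlation(arousal, sound_energy)))
```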

Cooray SH, Lee H, and O’Connor NE 2011 A user-centric system for home movie summarisation, 17th International Conference on Multimedia Modeling, 5-7 January 2011, Taipei, Taiwan.

In this paper we present a user-centric summarisation system that combines automatic visual-content analysis with user-interface design features as a practical method for home movie summarisation. The proposed summarisation system is designed in such a manner that the video segmentation results generated by the automatic content analysis tools are further subject to refinement through the use of an intuitive user-interface so that the automatically created summaries can be effectively tailored to each individual’s personal need. To this end, we study a number of content analysis techniques to facilitate the efficient computation of video summaries, and more specifically emphasise the need for employing an efficient and robust optical flow field computation method for sub-shot segmentation in home movies. Due to the subjectivity of video summarisation and the inherent challenges associated with automatic content analysis, we propose novel user-interface design features as a means to enable the creation of meaningful home movie summaries in a simple manner. The main features of the proposed summarisation system include the ability to automatically create summaries of different visual comprehension, interactively defining the target length of the desired summary, easy and interactive viewing of the content in terms of a storyboard, and manual refinement of the boundaries of the automatically selected video segments in the summary.
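The abstract does not spell out the flow algorithm, but the idea of using optical flow to find sub-shot boundaries in unedited home footage can be sketched with OpenCV’s Farneback flow. Everything below (the threshold, the frame-sampling step, and the jump-in-mean-magnitude heuristic) is an illustrative assumption rather than the authors’ pipeline.

```python
import cv2
import numpy as np

def subshot_boundaries(path, motion_jump=2.0, step=5):
    """Very rough sub-shot detector: flag frames where the mean optical-flow
    magnitude jumps, a crude proxy for a change in camera motion.
    The threshold and sampling step are arbitrary illustrative values."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    boundaries, last_mag, idx = [], 0.0, 0
    while True:
        for _ in range(step):                  # sample every `step` frames for speed
            ok, frame = cap.read()
            idx += 1
            if not ok:
                cap.release()
                return boundaries
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2).mean()   # average motion magnitude
        if abs(mag - last_mag) > motion_jump:
            boundaries.append(idx)
        last_mag, prev = mag, gray
```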

Joho H, Jose JM, Valenti R, and Sebe N 2009 Exploiting facial expressions for affective video summarisation, ACM International Conference on Image and Video Retrieval, 8-10 July, 2009, Santorini, Greece.

This paper presents an approach to affective video summarisation based on the facial expressions (FX) of viewers. A facial expression recognition system was deployed to capture a viewer’s face and his/her expressions. The user’s facial expressions were analysed to infer personalised affective scenes from videos. We proposed two models, pronounced level and expression’s change rate, to generate affective summaries using the FX data. Our results suggested that FX can be a promising source to exploit for affective video summaries that can be tailored to individual preferences.
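The abstract names the two models but not their definitions, so the following is only one plausible reading of them: score a clip by how pronounced the viewer’s non-neutral expression is, and by how often the dominant expression changes. The column layout of the probability matrix is hypothetical.

```python
import numpy as np

def affect_scores(probs, neutral_col=0):
    """Two illustrative per-clip scores loosely inspired by the paper's
    'pronounced level' and 'change rate' models (exact definitions differ).

    probs: array of shape (frames, expressions), one probability row per frame
    from a facial-expression recogniser; column `neutral_col` is 'neutral'."""
    non_neutral = np.delete(probs, neutral_col, axis=1)
    pronounced = non_neutral.max(axis=1).mean()       # how strongly expressions show
    labels = probs.argmax(axis=1)
    change_rate = (labels[1:] != labels[:-1]).mean()  # how often the expression flips
    return pronounced, change_rate

rng = np.random.default_rng(1)
frames = rng.dirichlet(np.ones(4), size=250)          # 250 fake frames, 4 expressions
print(affect_scores(frames))
```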

Joho H, Staiano J, Sebe N, and Jose JM 2011 Looking at the viewer: analysing facial activities to detect personal highlights of multimedia contents, Multimedia Tools and Applications 51 (2): 505-523.

This paper presents an approach to detect personal highlights in videos based on the analysis of facial activities of the viewer. Our facial activity analysis was based on the motion vectors tracked on twelve key points in the human face. In our approach, the magnitude of the motion vectors represented a degree of a viewer’s affective reaction to video contents. We examined 80 facial activity videos recorded for ten participants, each watching eight video clips in various genres. The experimental results suggest that useful motion vectors to detect personal highlights varied significantly across viewers. However, it was suggested that the activity in the upper part of the face tended to be more indicative of personal highlights than the activity in the lower part.
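A minimal sketch of the facial-activity measure, assuming the twelve key points have already been tracked to an array of per-frame coordinates; which of the twelve points count as “upper face” is a guess here, not the paper’s assignment.

```python
import numpy as np

def keypoint_activity(points, upper_idx):
    """points: array of shape (frames, 12, 2) with tracked facial key points.
    Returns mean per-frame motion magnitude for upper- and lower-face points.
    Which of the twelve points count as 'upper' is an assumption here."""
    motion = np.linalg.norm(np.diff(points, axis=0), axis=2)  # (frames-1, 12)
    upper = motion[:, upper_idx].mean(axis=1)
    lower = np.delete(motion, upper_idx, axis=1).mean(axis=1)
    return upper, lower

rng = np.random.default_rng(2)
track = rng.normal(size=(300, 12, 2)).cumsum(axis=0)   # fake 300-frame track
upper, lower = keypoint_activity(track, upper_idx=[0, 1, 2, 3, 4, 5])
# A highlight detector might threshold `upper`, which the paper found more indicative.
print(upper.mean(), lower.mean())
```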

Peng W-T, Huang W-J, Chu W-T, Chou C-N, Chang W-Y, Chang C-H, and Hung T-P 2009 A user experience model for home video summarization, 15th International Multimedia Modeling Conference on Advances in Multimedia Modeling, 7-9 January 2009, Sophia-Antipolis, France.

In this paper, we propose a novel system for automatically summarizing home videos based on a user experience model. The user experience model takes account of users’ spontaneous behaviors when viewing videos. Based on users’ reactions when viewing videos, we can construct a systematic framework to automate video summarization. In this work, we analyze the variations in a viewer’s eye movement and facial expression as he or she watches the raw home video. We transform these behaviors into clues for determining the important part of each video shot. With the aid of music analysis, the developed system automatically generates a music video (MV) style summary of the home video. Experiments show that this new type of editing mechanism can effectively generate home video summaries and can largely reduce the effort of manual summarization.
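The final step of such a pipeline (cutting the highest-scoring segments down to fit a music track) can be sketched as a simple greedy selection. The importance scores, which the paper derives from eye movement and facial expression, are taken as given here; the selection rule is an assumption, not the authors’ algorithm.

```python
def select_segments(shots, music_len):
    """shots: list of (start, end, importance) from viewer-behaviour analysis.
    Greedily keep the most 'important' shots until they fill the music track."""
    chosen, used = [], 0.0
    for start, end, score in sorted(shots, key=lambda s: -s[2]):
        if used + (end - start) <= music_len:
            chosen.append((start, end))
            used += end - start
    return sorted(chosen)

# Hypothetical shots scored by viewer attention, fitted to ~10 s of music.
shots = [(0, 4, 0.9), (4, 10, 0.2), (10, 13, 0.7), (13, 20, 0.5)]
print(select_segments(shots, music_len=10))   # -> [(0, 4), (10, 13)]
```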

Wang S and Hu Y 2010 Affective video analysis by using users’ EEG and subjective evaluation, International Conference on Kansei Engineering and Emotion Research, 2-4 March 2010, Paris.

This paper describes a research project conducted to study the relationship between videos and users’ induced physiological and psychological responses. Firstly, a set of 43 film clips are carefully chosen, and 20 subjects are invited to participate in our experiment. They watch several of the chosen clips while their EEG signals are recorded synchronously. After each clip, the subject is required to report his real induced emotion using emotional valence, arousal, basic emotion category and intensity. Secondly, several classical movie features and EEG features are extracted, and feature selection is conducted by computing the correlation between each feature and the arousal or valence. Thirdly, selected movie features and EEG features are used to estimate the arousal and valence respectively by employing the linear relevance vector machine. Fourthly, selected movie features are used to estimate the EEG feature values, and vice versa. The results show that arousal/valence can be well estimated by either video features or EEG features. Apart from that, they also indicate that there exists a certain relationship between the videos and induced EEG signals, and some relation models are acquired. Finally, clustering is conducted to map the emotion dimensions to emotion categories. Thus, the gap between videos and emotion categories, as well as the gap between the EEG and emotion categories, has been bridged to some extent. This result could provide a reference to applications in the brain-computer interaction field.
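The correlation-based feature selection and the regression step can be sketched as follows. scikit-learn ships no relevance vector machine, so a Bayesian ridge regressor stands in for the paper’s linear RVM, and the clips, features, and ratings are all synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(3)
X = rng.standard_normal((43, 20))          # 43 clips x 20 candidate EEG/movie features
arousal = X[:, 3] * 0.8 + rng.standard_normal(43) * 0.2   # synthetic ratings

# Feature selection: keep features whose |correlation| with arousal is high.
corr = np.array([np.corrcoef(X[:, j], arousal)[0, 1] for j in range(X.shape[1])])
selected = np.abs(corr) > 0.3              # threshold is an arbitrary illustrative value

# BayesianRidge stands in for the paper's linear relevance vector machine.
model = BayesianRidge().fit(X[:, selected], arousal)
print(model.score(X[:, selected], arousal))   # R^2 on the training clips
```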
