The empirical analysis of film style II

Following on from an earlier post that provided links to papers on the empirical analysis of film style and related issues (here), this week we have some more links on the same subject. As before, the links will take you directly to the paper or to the repository where the work is held. Bear in mind that the version linked to may be a draft or a pre-/post-print of the work in question.

Adams B, Dorai C, and Venkatesh S 2002 Toward automatic extraction of expressive elements from motion pictures: tempo, IEEE Transactions on Multimedia 4 (4): 472-481. [Abstract only].

Jacobs R 2005 Influence of Shot Length and Camera Movement on Depth Perception in 3DTV, unpublished Master's Thesis, TUE.

Kelley M (n.d.) Pacing in television newscasts: does target audience make a difference?

Rasheed Z and Shah M 2001 Scene detection in Hollywood movies and TV shows, The Eighth IEEE International Conference on Computer Vision, 9-12 July 2001, Vancouver, Canada.

Rasheed Z, Sheikh Y, and Shah M 2005 On the use of computable features for film classification, IEEE Transactions on Circuits and Systems for Video Technology 15 (1): 52-64.

Abstract

This paper presents a framework for the classification of feature films into genres, based only on computable visual cues. We view the work as a step toward high-level semantic film interpretation, currently using low-level video features and knowledge of ubiquitous cinematic practices. Our current domain of study is the movie preview, commercial advertisements primarily created to attract audiences. A preview often emphasizes the theme of a film and hence provides suitable information for classification. In our approach, we classify movies into four broad categories: Comedies, Action, Dramas, or Horror films. Inspired by cinematic principles, four computable video features (average shot length, color variance, motion content and lighting key) are combined in a framework to provide a mapping to these four high level semantic classes. Mean shift classification is used to discover the structure between the computed features and each film genre. We have conducted extensive experiments on over a hundred film previews and notably demonstrate that low-level visual features (without the use of audio or text cues) may be utilized for movie classification. Our approach can also be broadened for many potential applications including scene understanding, the building and updating of video databases with minimal human intervention, browsing, and retrieval of videos on the Internet (video-on-demand) and video libraries.
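
To give a rough sense of what "computable" means here, the short Python sketch below shows how features of this kind might be derived and then clustered with mean shift. It is an illustration of the general idea only, not the authors' implementation: the feature definitions, the toy numbers, and the use of scikit-learn's MeanShift are my own assumptions.

```python
# Illustrative sketch only -- not the code of Rasheed, Sheikh and Shah.
# Assumes shot boundary times and per-frame HSV statistics have already
# been extracted by some other means.
import numpy as np
from sklearn.cluster import MeanShift

def average_shot_length(cut_times_sec):
    """Mean shot duration, given a sorted list of cut times in seconds."""
    return float(np.mean(np.diff(cut_times_sec)))

def colour_variance(frame_hsv):
    """Generalised variance of hue/saturation/value over sampled frames
    (frame_hsv: N x 3 array); a crude proxy for the colour feature."""
    return float(np.linalg.det(np.cov(frame_hsv, rowvar=False)))

def lighting_key(frame_brightness):
    """Mean brightness times its spread; one simple reading of 'lighting key'."""
    return float(np.mean(frame_brightness) * np.std(frame_brightness))

# Toy preview-level feature vectors (motion content is entered as a
# made-up number here rather than computed from optical flow).
features = np.array([
    # [avg shot length, colour variance, motion content, lighting key]
    [3.2, 0.8, 0.9, 0.12],   # hypothetical action preview
    [2.8, 0.7, 0.8, 0.10],   # hypothetical action preview
    [6.5, 0.3, 0.2, 0.45],   # hypothetical drama preview
    [5.9, 0.4, 0.3, 0.40],   # hypothetical drama preview
])
# Bandwidth chosen by hand for this toy data; the modes found by mean shift
# group the previews in the four-dimensional feature space.
labels = MeanShift(bandwidth=1.5).fit_predict(features)
print(labels)
```

The point of the example is simply that each of the four cues reduces to a single number per preview, so a film becomes a point in a low-dimensional space where an unsupervised method can look for structure.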

Ren R 2008 Audio-visual Football Video Analysis: From Structure Detection to Attention Analysis, unpublished Ph.D. Thesis, University of Glasgow. [A version of this thesis was published in 2008 by VDM Verlag].

Sundaram H and Shih-Fu Chang 2002 Computable scenes and structures in films, IEEE Transactions on Multimedia 4 (4): 482-491. [Abstract only].

Truong BT 2004 In Search of Structural and Expressive Elements in Film Based on Visual Grammar, unpublished Ph.D. Thesis, Curtin University of Technology.

Hee Lin Wang and Loong-Fah Cheong 2006 Affective understanding in film, IEEE Transactions on Circuits and Systems for Video Technology 16 (6): 689-704.

Abstract

Affective understanding of film plays an important role in sophisticated movie analysis, ranking and indexing. However, due to the seemingly inscrutable nature of emotions and the broad affective gap from low-level features, this problem is seldom addressed. In this paper, we develop a systematic approach grounded upon psychology and cinematography to address several important issues in affective understanding. An appropriate set of affective categories are identified and steps for their classification developed. A number of effective audiovisual cues are formulated to help bridge the affective gap. In particular, a holistic method of extracting affective information from the multifaceted audio stream has been introduced. Besides classifying every scene in Hollywood domain movies probabilistically into the affective categories, some exciting applications are demonstrated. The experimental results validate the proposed approach and the efficacy of the audiovisual cues.

Wei C-Y, Dimitrova N, and Chang S-F 2004 Color-mood analysis of films based on syntactic and psychological models, IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, June 2004.

Abstract

The emergence of peer-to-peer networking and the increase of home PC storage capacity are necessitating efficient scaleable methods for video clustering, recommending and browsing. Based on film theories and psychological models, color-mood is an important factor affecting user emotional preferences. We propose a compact set of features for color-mood analysis and subgenre discrimination. We introduce two color representations for scenes and full films in order to extract the essential moods from the films: a global measure for the color palette and a discriminative measure for the transitions of the moods in the movie. We captured the dominant color ratio and the pace of the movie. Despite the simplicity and efficiency of the features, the classification accuracy was surprisingly good, about 80%, possibly thanks to the prevalence of the color-mood association in feature films.
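
Purely as an illustration (and not the authors' code), the sketch below shows one simple way the two measures named in the abstract, a dominant colour ratio and the pace of the movie, might be computed, assuming per-frame hue histograms and shot boundary times are already available.

```python
# Illustrative sketch only, not the implementation of Wei, Dimitrova and Chang.
# Assumes an aggregate hue histogram and a list of cut times already exist.
import numpy as np

def dominant_colour_ratio(hue_histogram):
    """Fraction of pixels in the most populated hue bin: one simple
    reading of the 'dominant colour ratio' mentioned in the abstract."""
    hist = np.asarray(hue_histogram, dtype=float)
    return float(hist.max() / hist.sum())

def pace(cut_times_sec, duration_sec):
    """Cutting rate in shots per minute, used here as a stand-in for 'pace'."""
    n_shots = len(cut_times_sec) + 1
    return 60.0 * n_shots / duration_sec

# Toy example: a 90-second sequence dominated by one hue band and cut quickly.
hue_hist = [820, 40, 30, 25, 20, 15, 10, 40]        # 8-bin hue histogram
cuts = [2.1, 4.0, 5.5, 7.2, 9.0, 11.4, 14.0, 17.5]  # cut times in seconds
print(dominant_colour_ratio(hue_hist), pace(cuts, 90.0))
```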

Yuen J and Matsushita Y 2008 Statistical analysis of global motion chains, in Proceedings of the 10th European Conference on Computer Vision: Part II. Berlin: Springer: 692-705.

Abstract

Multiple elements such as lighting, colors, dialogue, and camera motion contribute to the style of a movie. Among them, camera motion is commonly overlooked yet a crucial point. For instance, documentaries tend to use long smooth pans whereas action movies usually have short and dynamic movements. This information, also referred to as global motion, could be leveraged by various applications in video clustering, stabilization, and editing. We perform analyses to study the in-class characteristics of these motions as well as their relationship with motions of other movie types. In particular, we model global motion as a multi-scale distribution of transformation matrices from frame to frame. Secondly, we quantify the difference between pairs of videos using the KL-divergence of these distributions. Finally, we demonstrate an application modeling and clustering commercial and amateur videos. Experiments performed show advantage compared to the usage of some local motion-based approaches.
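
The comparison of motion distributions described here can be sketched in a few lines. The example below is my own loose, single-scale paraphrase rather than the paper's multi-scale model of transformation matrices: it assumes that one global motion parameter per frame (here, horizontal translation) has been estimated beforehand, builds a histogram of it for each video, and compares the histograms with the KL-divergence.

```python
# Illustrative sketch only; a loose, single-scale paraphrase of the idea,
# not the authors' method. Assumes per-frame global motion values (e.g.
# horizontal translation in pixels) were estimated by some other process.
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two histograms defined over the same bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def motion_histogram(per_frame_motion, bins=20, value_range=(-10.0, 10.0)):
    """Histogram of one per-frame motion parameter for a single video."""
    hist, _ = np.histogram(per_frame_motion, bins=bins, range=value_range)
    return hist

# Two hypothetical videos: long smooth pans versus short, erratic movements.
rng = np.random.default_rng(0)
smooth = rng.normal(0.5, 0.2, 500)   # small, steady horizontal motion
jerky = rng.normal(0.0, 3.0, 500)    # large, erratic horizontal motion
print(kl_divergence(motion_histogram(smooth), motion_histogram(jerky)))
```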


About Nick Redfern

I graduated from the University of Kent in 1998 with a degree in Film Studies and History, and was awarded an MA by the same institution in 2002. I received my Ph.D. from Manchester Metropolitan University in 2006 for a thesis titled 'Regionalism and the Cinema in the United Kingdom, 1992 to 2002.' I have taught at Manchester Metropolitan University and the University of Central Lancashire. My research interests include regional film cultures and industries in the United Kingdom; cognition and communication in the cinema; anxiety in contemporary Hollywood cinema; cinemetrics; and film style and film form. My work has been published in Entertext, the International Journal of Regional and Local Studies, the New Review of Film and Television Studies, Cyfrwng: Media Wales Journal, and the Journal of British Cinema and Television.

Posted on April 8, 2010, in Cinemetrics, Film Analysis, Film Studies, Film Style, Film Theory.
