
Using the ECDF to analyse film style

Last month I looked at using kernel densities to analyse film style, and as a follow-up this week’s post focuses on another simple graphical method for understanding film style: the empirical cumulative distribution function (ECDF).

Although it has a grand-sounding name, this is a very simple method for getting a lot of information very quickly. Most statistical software packages will calculate the ECDF for you and draw the graph, but it is also very simple to set up an Excel or Calc spreadsheet to do this since it does not require any special knowledge.

The ECDF gives a complete description of a data set, and is simply the fraction of a data set less than or equal to some specified value. Several plotting positions for the ECDF have been suggested, but here we use the simplest method:

F(X) = (number of shots x less than or equal to X)/N,

which means that you count the number of shots (x) less than or equal to some value (X), and then divide by the sample size (N). Do this for every value of x in your data set and you have the ECDF. We can interpret this fraction in several ways: we can think of it as the probability of randomly selecting an x less than or equal to X (P[x ≤ X]); or we can think of it as the proportion of values less than or equal to X; or, if we multiply by 100, the percentage of values in a data set less than or equal to X.
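For readers who prefer to script this rather than build a spreadsheet, here is a minimal sketch in Python; the shot lengths are made-up values for illustration only.

```python
import numpy as np

def ecdf(shot_lengths, X):
    """Fraction of shots less than or equal to X, i.e. P[x <= X]."""
    x = np.asarray(shot_lengths, dtype=float)
    return np.sum(x <= X) / len(x)

# illustrative shot lengths in seconds (not real data)
shots = np.array([1.0, 1.1, 1.1, 1.1, 2.4, 3.7, 5.0, 8.3, 12.6, 66.6])

# evaluate the ECDF at every observed shot length
for X in np.sort(shots):
    print(f"F({X}) = {ecdf(shots, X):.4f}")
```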

For example, using the data set for Easy Virtue (1928) from the Cinemetrics database available here we can calculate the ECDF as illustrated in Table 1.

Table 1 Calculating the ECDF for Easy Virtue (1928) (N = 706)

To start, look at the value of X in the first column and then count the number of shots in the film with length less than or equal to that value. The first value is 0.9 but there are no shots this short in the film and so the frequency is zero. Divide this zero by the number of shots in the film (i.e. 706) and you have the ECDF when X = 0.9, which is 0 (because 0 divided by any number is always 0). Next, X = 1.0 seconds and there is 1 shot less than or equal to this value and so the ECDF at X = 1.0 is 1/706 = 0.0014. Turning to X = 1.1 we see there are three shots that are 1.1 seconds long AND there is one shot that is shorter in length (i.e. the one at 1.0s), and so the ECDF at X = 1.1 is 4/706 = 0.0057. This is equal to the frequency of 1.0 second long shots divided by N (0.0014) PLUS the frequency of shots that are 1.1 seconds long (3/706 = 0.0042) – and that is why it’s called the cumulative distribution function. From this point you keep going until you reach the end: the longest shot in the film is given as 66.6 seconds long and so all 706 shots must be less than or equal to 66.6 seconds, and at this value of X the ECDF = 706/706 = 1.0. The ECDF is 1.0 for any value of X greater than the maximum x in the data set.

It really is this easy. And you can get a simple graph of F(x) by plotting x on the x-axis and the ECDF on the y-axis. More usefully, you can plot the ECDFs of two or more films on the same graph so that you can compare their shot length distributions. Figure 1 shows the empirical cumulative distribution functions of Easy Virtue and The Skin Game (1931 – access the data here).

Figure 1 The empirical cumulative distribution functions of Easy Virtue (1928) and The Skin Game (1931)

Now clearly there is a problem with this graph: because the shot length distribution of a film is positively skewed, all the shots are bunched up on the left-hand side of the plot and you cannot see any detail. This can be resolved by redrawing the x-axis on a logarithmic scale, which stretches out the bottom end of the data, where all the detail is, and squashes the top end, which has only a few data points. This can be seen in Figure 2.

Figure 2 The empirical cumulative distribution functions of Easy Virtue (1928) and The Skin Game (1931) on a log-10 scale

These two graphs present exactly the same information, but at least in Figure 2 we can find the information we want. In transforming the x-axis we have not assumed the shot length distribution of either film follows a lognormal distribution – which is just as well because this is obviously not true for either film.
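A sketch of how plots along the lines of Figures 1 and 2 might be produced is below; the data arrays are placeholders and would be replaced with the shot lengths downloaded from Cinemetrics.

```python
import numpy as np
import matplotlib.pyplot as plt

def ecdf_points(shot_lengths):
    """Sorted shot lengths and the ECDF evaluated at each one."""
    x = np.sort(np.asarray(shot_lengths, dtype=float))
    F = np.arange(1, len(x) + 1) / len(x)
    return x, F

# placeholder data; substitute the real shot lengths for each film
easy_virtue = np.random.lognormal(1.6, 0.5, 706)
skin_game = np.random.lognormal(1.6, 0.9, 500)

for data, label in [(easy_virtue, "Easy Virtue"), (skin_game, "The Skin Game")]:
    x, F = ecdf_points(data)
    plt.step(x, F, where="post", label=label)

plt.xscale("log")              # as in Figure 2: log-10 scale on the x-axis
plt.xlabel("Shot length (s)")
plt.ylabel("F(x)")
plt.legend()
plt.show()
```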

Now what can we discover about the editing in these two films?

First, it is clear that these two films have the same median shot length because the probability of randomly selecting a shot less than or equal to 5.0 seconds is 0.5 in both films. The median shot length is the value that divides a data set in two so that half the shots are less than or equal to it and half are greater than or equal to it (i.e. P(x ≤ X) = 0.5). We might therefore conclude that they have the same style. However, these two films clearly have different shot length distributions, and it is easier to appreciate this when we combine numerical descriptions with a plot of the actual distributions.

A basic rule for interpreting the plot of ECDFs for two films is that if the plot for film A lies to the right of the plot for film B then film A is edited more slowly. Obviously this is not so clear cut in Figure 2.

Below the median shot length, the ECDF of The Skin Game lies to the left of that of Easy Virtue, indicating that it has a greater proportion of shots at the low end of the distribution: for example, 25% of the shots in The Skin Game are less than or equal to 2.0 seconds in length compared to just 6% of the shots in Easy Virtue. This would seem to indicate that The Skin Game is edited more quickly than Easy Virtue. At the same time we see that above the median shot length the ECDF of The Skin Game lies to the right of that of Easy Virtue, indicating that it has a lower proportion of shots at the high end of the distribution: for example, 75% of the shots in Easy Virtue are less than or equal to 8.3 seconds compared to 66% of the shots in The Skin Game. This would appear to suggest that The Skin Game is edited more slowly than Easy Virtue. Clearly there is something more interesting going on than indicated by the equality of the medians, and the answer lies in how spread out the shot lengths of these two films are. The ECDF of Easy Virtue is very steep and covers only a limited range of values, whereas the ECDF of The Skin Game covers a much wider range of shot lengths. The interquartile range of Easy Virtue is 5.2 seconds (Q1 = 3.1s, Q3 = 8.3s), indicating the shot lengths of this film are not widely dispersed; while the IQR of The Skin Game is 12.7s (Q1 = 2.0s, Q3 = 14.7s).
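If you want to check figures like these for your own data, the quartiles and IQR are a single line in most environments; a minimal sketch, reusing the hypothetical shot length array from the first sketch above:

```python
import numpy as np

# quartiles and interquartile range of a film's shot lengths
q1, median, q3 = np.percentile(shots, [25, 50, 75])   # 'shots' as in the ECDF sketch
iqr = q3 - q1
print(f"Q1 = {q1:.1f}s, median = {median:.1f}s, Q3 = {q3:.1f}s, IQR = {iqr:.1f}s")
```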

This example is an excellent demonstration of why it is important to always provide a measure of the dispersion of a data set when describing film style. It is not enough to only provide the average shot length since two films may have the same median shot length and completely different editing styles. See here for a discussion of appropriate measures of scale that can be used. It should be standard practice that an appropriate measure of dispersion is cited along with the median shot length for a film by any researcher who wants to do statistical analysis of film style, and journal editors and/or book publishers who receive work where this is not the case should send it back immediately with a note asking for a proper description of a film’s style. If you don’t include any description – either numerical or graphical – of the dispersion of shot lengths in a film then you haven’t described your data properly.

We can also use the ECDFs for two films to perform a statistical test of the null hypothesis that they have the same distribution. This is called the Kolmogorov-Smirnov (KS) test, and the test statistic is simply the maximum value of the absolute differences between the ECDF of one film (F(x)) and the ECDF of another film (G(x)) over every value of x. The ‘absolute difference’ means that you subtract one from the other and then take only the size of the answer, ignoring the sign (i.e. ignore whether it is positive or negative):

D = max |F(x) – G(x)|.

Table 2 shows this process for the two films in Figures 1 and 2.

Table 2 Calculating the Kolmogorov-Smirnov test statistic for the ECDFs of Easy Virtue (1928) and The Skin Game (1931)

In the first column in Table 2 we have the lengths of the shots from the smallest in the two films (0.6 seconds) to the longest (174.7 seconds), and then in columns two and three we have the ECDF of each film. Column four is the difference between the ECDFs of the two films, subtracting the ECDF of The Skin Game from the ECDF of Easy Virtue for every x: so when x = 0.6, we have 0-0.0037 = -0.0037. The final column is the absolute difference, which is just the size of the value in the fourth column and the sign is ignored: the absolute value of -0.0037 is 0.0037. Do this for every value of x and find the largest value in the final column.

In the case of these two films the maximum absolute difference occurs when x = 2.0 and is statistically significant (p < 0.01). Therefore we conclude these two films have different shot length distributions. (You may find that different statistical packages give slightly different answers depending on the plotting position used.)
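The whole procedure worked through in Table 2 is also available as a single function call in most statistical environments; a sketch using scipy, with placeholder file names standing in for the downloaded shot length data:

```python
import numpy as np
from scipy import stats

easy_virtue = np.loadtxt("easy_virtue.txt")   # placeholder file names
skin_game = np.loadtxt("skin_game.txt")

# D is the maximum absolute difference between the two ECDFs
D, p = stats.ks_2samp(easy_virtue, skin_game)
print(f"D = {D:.4f}, p = {p:.4f}")
```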

An online calculator for the KS test that will also draw a plot of the ECDFs can be accessed here, and is accompanied by a very useful explanation. (NB: this only works for data sets up to N = 1024). Rescaling the x-axis of our plot of the two ECDFs does not affect the KS test, since the ECDFs are on the y-axis and the D column in Table 2 is the vertical difference between them.

(There is also a one-sample version of the KS test for comparing a single distribution to a theoretical distribution to determine goodness-of-fit, but there are so many other methods that do exactly the same thing better that it’s not worth bothering with.)

The ECDF is very easy to calculate, the graph is very easy to produce and provides a lot of information about a data set for very little effort, and the KS test is also a very simple way of comparing two data sets. There is no bewildering mathematics involved: just count, divide, add, subtract, and ignore. The statistical analysis of film style really is this easy.

Robust time series analysis of ITV news bulletins

I have mentioned numerous times on this blog the importance of using robust statistics to describe film style. This week I continue in this vein, albeit in a different context – time series analysis. In a much publicised piece of work James Cutting, Jordan De Long, and Christine Nothelfer (2010) calculated partial autocorrelation functions and a modified autoregressive index for a sample of Hollywood films. While I have no problems with the basis of this research, I do think the results are dubious due to the use of non-robust methods to determine the autocovariance between shot lengths in these films. The paper attached below analyses the editing structure of the set of ITV news bulletins I discussed in a paper last year, comparing the results produced using classical and robust autocovariance functions.

Robust time series analysis of ITV news bulletins

In this paper we analyse the editing of ITV news bulletins using robust statistics to describe the distribution of shot lengths and its editing structure. Commonly cited statistics of film style such as the mean and variance do not accurately describe the style of a motion picture and reflect the influence of a small number of extreme values. Analysis based on such statistics will inevitably lead to flawed conclusions. The median and Qn are superior measures of location and dispersion for shot lengths since they are resistant to outliers and unaffected by the asymmetry of the data. The classical autocovariance and its related functions based on the mean and the variance are also non-robust in the presence of outliers, and lead to a substantially different interpretation of editing patterns when compared to robust time series statistics that are outlier resistant. In general, the classical methods underestimate the persistence in the time series of these bulletins, indicating a random editing process, whereas the robust time series statistics suggest an AR(1) or AR(2) model may be appropriate.

The pdf file is here: Nick Redfern – Robust Time Series Analysis of ITV News Bulletins

My original post on the time series analysis of ITV news bulletins can be accessed here, along with the datasets for each of the fifteen bulletins.
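As a rough illustration of the kind of comparison the paper makes – not a reproduction of its method – one simple outlier-resistant alternative to the classical autocorrelation function is to compute the same function on the ranks of the shot lengths rather than the raw values; the file name below is a placeholder for one of those datasets.

```python
import numpy as np
from scipy import stats

def acf(x, max_lag):
    """Classical (biased) autocorrelation function for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[k:] * x[:len(x) - k]) / denom
                     for k in range(1, max_lag + 1)])

shot_lengths = np.loadtxt("bulletin.txt")            # placeholder file name
classical = acf(shot_lengths, 10)                    # sensitive to long outlying shots
rank_based = acf(stats.rankdata(shot_lengths), 10)   # outlier-resistant variant
print(classical)
print(rank_based)
```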

My new results indicate the conclusions of Cutting, De Long, and Nothelfer are flawed, and that it is very likely they have underestimated the autocovariance present in the editing of Hollywood films. The discrete and modified autoregressive indexes they present are likely to be too low, though there may be some instances when they are actually too high. This is not enough to reject their conclusion that Hollywood films have become increasingly clustered in packets of shots of similar length, and I have not yet applied this method to their sample of films. It is, however, enough to recognise there are some problems with the methodology and the results of this research.

References

Cutting JE, De Long JE, and Nothelfer CE 2010 Attention and the evolution of Hollywood film, Psychological Science 21 (3): 432-439.

Revealing narrative structure through aesthetic analysis

This week some papers relating to the discovery of narrative structure in motion pictures based on the patterns of aesthetic elements. But first, many of the papers on statistical analysis of film style in this post and on many others across this blog are co-authored by Svetha Venkatesh from Curtin University’s Computing department, and her home page – with links to much research relevant to film studies – can be accessed here.

Adams B, Venkatesh S, Bui HH, and Dorai C 2007 A probabilistic framework for extracting narrative act boundaries and semantics in motion pictures, Multimedia Tools and Applications 27: 195-213.

This work constitutes the first attempt to extract the important narrative structure, the 3-Act storytelling paradigm in film. Widely prevalent in the domain of film, it forms the foundation and framework in which a film can be made to function as an effective tool for storytelling, and its extraction is a vital step in automatic content management for film data. The identification of act boundaries allows for structuralizing film at a level far higher than existing segmentation frameworks, which include shot detection and scene identification, and provides a basis for inferences about the semantic content of dramatic events in film. A novel act boundary likelihood function for Act 1 and 2 is derived using a Bayesian formulation under guidance from film grammar, tested under many configurations and the results are reported for experiments involving 25 full-length movies. The result proves to be a useful tool in both the automatic and semi-interactive setting for semantic analysis of film, with potential application to analogues occurring in many other domains, including news, training video, sitcoms.

Chen H-W, Kuo J-H, Chu W-T, Wu J-L 2004 Action movies segmentation and summarization based on tempo analysis, 6th ACM SIGMM International Workshop on Multimedia Information Retrieval, New York, NY, 10-16 October, 2004.

With the advances of digital video analysis and storage technologies, also the progress of entertainment industry, movie viewers hope to gain more control over what they see. Therefore, tools that enable movie content analysis are important for accessing, retrieving, and browsing information close to a human perceptive and semantic level. We proposed an action movie segmentation and summarization framework based on movie tempo, which represents as the delivery speed of important segments of a movie. In the tempo-based system, we combine techniques of the film domain related knowledge (film grammar), shot change detection, motion activity analysis, and semantic context detection based on audio features to grasp the concept of tempo for story unit extraction, and then build a system for action movies segmentation and summarization. We conduct some experiments on several different action movie sequences, and demonstrate an analysis and comparison according to the satisfactory experimental results.

Hu W, Xie N, Li L, Zeng X, and Maybank S 2011 A survey on visual content-based video indexing and retrieval, IEEE Transactions On Systems, Man, and Cybernetics—Part C: Applications And Reviews, 41 (6): 797-819.

Video indexing and retrieval have a wide spectrum of promising applications, motivating the interest of researchers worldwide. This paper offers a tutorial and an overview of the landscape of general strategies in visual content-based video indexing and retrieval, focusing on methods for video structure analysis, including shot boundary detection, key frame extraction and scene segmentation, extraction of features including static key frame features, object features and motion features, video data mining, video annotation, video retrieval including query interfaces, similarity measure and relevance feedback, and video browsing. Finally, we analyze future research directions.

Moncrieff S and Venkatesh S 2006 Narrative structure detection through audio pace, IEEE Multimedia Modeling 2006, Beijing, China, 4–6 Jan 2006

We use the concept of film pace, expressed through the audio, to analyse the broad level narrative structure of film. The narrative structure is divided into visual narration, action sections, and audio narration, plot development sections. We hypothesise that changes in the narrative structure signal a change in audio content, which is reflected by a change in audio pace. We test this hypothesis using a number of audio feature functions, that reflect the audio pace, to detect changes in narrative structure for 8 films of varying genres. The properties of the energy were then used to determine the audio pace feature corresponding to the narrative structure for each film analysed. The method was successful in determining the narrative structure for 7 of the films, achieving an overall precision of 76.4 % and recall of 80.3%. We map the properties of the speech and energy of film audio to the higher level semantic concept of audio pace. The audio pace was in turn applied to a higher level semantic analysis of the structure of film.

Murtagh F, Ganz A, and McKie S 2009 The structure of narrative: the case of film scripts, Pattern Recognition 42 (2): 302-312.

We analyze the style and structure of story narrative using the case of film scripts. The practical importance of this is noted, especially the need to have support tools for television movie writing. We use the Casablanca film script, and scripts from six episodes of CSI (Crime Scene Investigation). For analysis of style and structure, we quantify various central perspectives discussed in McKee’s book, Story: Substance, Structure, Style, and the Principles of Screenwriting. Film scripts offer a useful point of departure for exploration of the analysis of more general narratives. Our methodology, using Correspondence Analysis and hierarchical clustering, is innovative in a range of areas that we discuss. In particular this work is groundbreaking in taking the qualitative analysis of McKee and grounding this analysis in a quantitative and algorithmic framework.

Phung DQ , Duong TV, Venkatesh S, and Bui HH 2005 Topic transition detection using hierarchical hidden Markov and semi-Markov models, 13th Annual ACM International Conference on Multimedia, 6-11 November 2005, Singapore.

In this paper we introduce a probabilistic framework to exploit hierarchy, structure sharing and duration information for topic transition detection in videos. Our probabilistic detection framework is a combination of a shot classification step and a detection phase using hierarchical probabilistic models. We consider two models in this paper: the extended Hierarchical Hidden Markov Model (HHMM) and the Coxian Switching Hidden semi-Markov Model (S-HSMM) because they allow the natural decomposition of semantics in videos, including shared structures, to be modeled directly, and thus enable efficient inference and reduce the sample complexity in learning. Additionally, the S-HSMM allows the duration information to be incorporated, consequently the modeling of long-term dependencies in videos is enriched through both hierarchical and duration modeling. Furthermore, the use of Coxian distribution in the S-HSMM makes it tractable to deal with long sequences in video. Our experimentation of the proposed framework on twelve educational and training videos shows that both models outperform the baseline cases (flat HMM and HSMM) and performances reported in earlier work in topic detection. The superior performance of the S-HSMM over the HHMM verifies our belief that the duration information is an important factor in video content modelling.

Pfeiffer S and Srinivasan U 2002 Scene determination using auditive segmentation models of edited video, in C Dorai and S Venkatesh (eds.) Computational Media Aesthetics. Boston: Kluwer Academic Publishers: 105-130.

This chapter describes different approaches that use audio features for determination of scenes in edited video. It focuses on analysing the sound track of videos for extraction of higher-level video structure. We define a scene in a video as a temporal interval which is semantically coherent. The semantic coherence of a scene is often constructed during cinematic editing of a video. An example is the use of music for concatenation of several shots into a scene which describes a lengthy passage of time such as the journey of a character. Some semantic coherence is also inherent to the unedited video material such as the sound ambience at a specific setting, or the change pattern of speakers in a dialogue. Another kind of semantic coherence is constructed from the textual content of the sound track revealing for example the different stories contained in a news broadcast or documentary. This chapter explains the types of scenes that can be constructed via audio cues from a film art perspective. It continues on a discussion of the feasibility of automatic extraction of these scene types and finally presents existing approaches.

Weng C-Y, Chu W-T, and Wu J-L 2009 RoleNet: movie analysis from the perspective of social networks, IEEE Transactions on Multimedia 11(2): 256-271.

With the idea of social network analysis, we propose a novel way to analyze movie videos from the perspective of social relationships rather than audiovisual features. To appropriately describe role’s relationships in movies, we devise a method to quantify relations and construct role’s social networks, called RoleNet. Based on RoleNet, we are able to perform semantic analysis that goes beyond conventional feature-based approaches. In this work, social relations between roles are used to be the context information of video scenes, and leading roles and the corresponding communities can be automatically determined. The results of community identification provide new alternatives in media management and browsing. Moreover, by describing video scenes with role’s context, social-relation-based story segmentation method is developed to pave a new way for this widely-studied topic. Experimental results show the effectiveness of leading role determination and community identification. We also demonstrate that the social-based story segmentation approach works much better than the conventional tempo-based method. Finally, we give extensive discussions and state that the proposed ideas provide insights into context-based video analysis.

The editing structure of Follow the Fleet (1936)

This week I look at the editing structure of the Fred Astaire-Ginger Rogers musical Follow the Fleet (1936). I looked at the structure of Top Hat in an earlier post, which you can find here. Figure 1 presents the order structure matrix of Follow the Fleet, in which white columns indicate shorter shots and darker patches represent clusters of longer takes. A spreadsheet with the raw data (from a PAL DVD and corrected by 1.4016) can be accessed here: Nick Redfern – Follow the Fleet. The opening and closing credits have not been included.

Figure 1 Order structure matrix of Follow the Fleet (1936)
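For anyone who wants to draw a similar matrix for another film, here is a minimal sketch. It assumes one common construction – cell (i, j) records whether shot j is at least as long as shot i, so that columns for short shots stay white and runs of long takes form dark blocks – which may differ in detail from the matrix in Figure 1; the file name is a placeholder.

```python
import numpy as np
import matplotlib.pyplot as plt

shot_lengths = np.loadtxt("follow_the_fleet.txt")   # placeholder file name

# order[i, j] = 1 where shot j is at least as long as shot i
order = (shot_lengths[np.newaxis, :] >= shot_lengths[:, np.newaxis]).astype(int)

plt.imshow(order, cmap="gray_r", interpolation="nearest")   # 0 = white, 1 = black
plt.xlabel("Shot number")
plt.ylabel("Shot number")
plt.show()
```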

The editing of this film doesn’t show the same clear pattern of alternating between quicker and slower cut segments we see in Top Hat. Follow the Fleet is certainly cut much more slowly, with a median shot length of 7.5 seconds and an interquartile range of 10.4 seconds compared to Top Hat’s median of 5.5s and IQR of 7.2s. In the earlier film the different editing patterns were associated with musical numbers and comedy sequences, but Follow the Fleet lacks the comedy element. Randolph Scott is, I’m afraid to say, terribly dull in this film (and calling his character ‘Bilge’ doesn’t help). The spark between Astaire and Rogers that drives Top Hat, especially in the first section set in London, is missing here too, and at nearly two hours long this film doesn’t hold the same interest. It somehow achieves the stunning feat of being both lacking in plot and predictable. There does not appear to be any particular trend over time in the editing structure, and this may be due to the high variability of shot lengths. The IQR noted above is much greater than appears to be typical for Hollywood films of the 1930s (or indeed any period), and so the time series in the order structure matrix looks relatively featureless.

Those features that do stand out in the matrix are those sequences comprising several longer takes, and these are typically associated with the musical numbers. However, not all musical numbers are associated with such clusters. For example, ‘We saw the sea’ (shots 1-8) and Harriet Hillard singing ‘Get thee behind me Satan’ (shots 124-128) do not immediately jump out at you; whereas the dark column between shots 270 and 286 – ‘I’d rather lead a band,’ running to 351.1 seconds with its extended dance sequence on board ship – is instantly recognisable.

‘Let yourself go’ appears several times throughout the film, making its bow with Rogers singing between shots 59 and 67, with the comic dance competition to this tune running from shots 132-150. These numbers are not associated with the sort of clusters of longer shots we see in the second half of the matrix, though they are generally slower than other sequences in the first 35 minutes of the film. Rogers’ solo tap dance audition is shot 317, and is followed by a cluster of short shots (319-325) when Astaire overhears how successful she is and decides to sabotage her singing audition. The subsequent disastrous reprise of ‘Let yourself go’ after Rogers’ drink has been spiked occurs at shots 333 to 338. Hillard singing ‘But where are you?’ begins and ends at shots 356 and 359, respectively, but this does not show up in the matrix as distinguishable from the shots around it.

The musical sequence featuring ‘I’m putting all my eggs in one basket’ begins at shot 416, with Astaire playing piano, and the number itself starts at shot 421 and runs until shot 428 for a total of 334.2 seconds. The most famous sequence from this film accounts for the cluster of long shots from 506 to 534, and includes ‘Let’s face the music and dance.’ The number itself only accounts for the last 2 shots running to 286.0 seconds.

Both Top Hat and Follow the Fleet were directed by Mark Sandrich, and David Abel was the cinematographer for both films. Top Hat was edited by William Hamilton, whereas Follow the Fleet was edited by Henry Berman. We do not know enough about RKO’s mode of production to determine how the working relationship between these and other filmmakers was structured, and so we will have to wait and see what the editing structure of other musicals in the Fred Astaire and Ginger Rogers series for the studio will tell us about the authorship of these films (if, indeed, there is any such person).

On researching genre

Last year I wrote a piece on genre trends at the US box office over the past two decades, which you can find here. I submitted this piece to the European Journal of American Culture, and having done some revisions I heard from the editor yesterday that it is likely to be published later in the year. This week I want to comment briefly on a point raised in the peer review process regarding the problems of researching genre.

In my paper I sorted films achieving high box office rankings into nine broad categories: ‘action/adventure,’ ‘comedy,’ ‘crime/thriller,’ ‘drama,’ ‘family,’ ‘fantasy/science fiction,’ ‘horror,’ ‘romance,’ and ‘other.’ The reviewer raised the following point:

… it was never clear to me, at least, on what basis the generic trends they isolated and analysed were identified, are they drawn from industry accepted classifications, or are they drawn from the authors’ observations? ‘Family,’ ‘romance,’ ‘comedy,’ ‘fantasy/science fiction’ maybe self-explanatory, but what’s the difference between action/adventure and the latter, or between it and crime/thriller? And what constitutes a “drama”? Perhaps a fuller discussion/review of the cycles of films that make up the trends they have identified would make classification less problematic …

This clearly relates to the four problems of genre definition described by Robert Stam (2000: 128-129):

  • Extension: generic labels are either too broad or too narrow;
  • Normativism: having preconceived ideas of criteria for genre membership;
  • Monolithic definitions: as if an item belonged to only one genre;
  • Biologism: a kind of essentialism in which genres are seen as evolving through a standardised life cycle.

To these we can add the ‘empiricist dilemma’ of analysing genre films to determine which genres they belong to and why only after we have first defined the genres themselves (Tudor 1974).

There are no simple definitions of genres, and trying to solve this riddle has probably driven several film scholars to despair. In fact, one of the two things that everyone agrees on when discussing genres is that no-one agrees about genre definitions. For example, in 1975 Douglas Pye warned against treating genres as Platonic forms that are ‘essentially definable’ and of approaching genre criticism ‘as in need of defining criteria’ (Pye 1975: 30, original emphasis). The same argument is made by David Bordwell 14 years later, arguing there is no fixed system of genre definitions in the film industry or film studies and that no strictly deductive set of principles is capable of explaining genre groupings (1989: 147). In 2008 Raphaëlle Moine writes of being in the ‘genre jungle’ that we are unable to clear with ‘a few machete blows as strong as they were lethal;’ and that not only are definitions of individual genres problematic, the very concept of genre itself and how it functions for producers and audiences is ‘neither definitive, nor perfect, nor incontestable’ (2008: 27).

If we consider film genres as categories of classification, one can only note the vitality of generic activity at an empirical level, and the impossibility of organizing cinema dogmatically into a definitive and universal typology of genres at a theoretical level. Categories exist but they are not impermeable. They may coincide at certain points, contradict one another, and are the product of different levels of differentiation or different frames of reference (Moine 2008: 24).

I think that this sums up the problems of researching genre very simply and very clearly. What it doesn’t do is help me with the reviewer’s comments. In fact, it makes them more complicated since we have to acknowledge that ‘family,’ ‘romance,’ ‘comedy,’ and ‘fantasy/science fiction’ are not as unproblematic as we might at first suspect. This is in fact obvious in the above comments: the reviewer immediately questions the distinction between ‘fantasy/science fiction’ and ‘action/adventure,’ and so there is clearly some doubt here. So what should I do?

One solution is to give up. We could simply admit that genres are undefinable, that it is pointless to even attempt any sort of genre analysis given that we cannot begin to describe the object of inquiry or to delineate any individual genres, and regard all genre scholarship as inherently flawed.

This is a ridiculous approach to take since genre categories are obviously widely used by the film industry and by audiences day-to-day in a diverse set of contexts. This is the other thing that everyone agrees upon: genre is important. And if it is important then it is definitely something that should be the subject of empirical analysis. So, again, what should I do?

The solution I arrived at was to recognise the subjective nature of genre definitions, but to also make a distinction between ‘subjective’ and ‘arbitrary.’ My inspiration in this was Bayesian probability theory. For a brief overview of Bayes’ theorem and a demonstration of its use see my earlier post on modelling narrative comprehension here. In Bayesian theory probabilities express an agent’s degree of belief in a statement: so a statement like ‘I think there is an 80% chance of rain this afternoon’ is my belief that it will rain after midday expressed as a probability [1]. The Bayesian approach assumes I am a rational agent who holds an opinion about the likelihood of an event based on the available information (the forecast is for rain, it’s the autumn, I live in the north of England, etc.). As I acquire new information I can update this probability and revise the intensity of my belief by applying Bayes’ theorem. My belief is subjective but it is not arbitrary: Pierre-Simon Laplace referred to probability in this sense as ‘only good sense reduced to calculus.’
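To make the updating step concrete, here is a small worked example with invented numbers; the likelihoods are made up purely to illustrate the arithmetic of Bayes’ theorem.

```python
# Prior degree of belief that it will rain this afternoon
p_rain = 0.8

# New information arrives: the barometer falls. Invented likelihoods:
p_fall_given_rain = 0.7    # P(barometer falls | rain)
p_fall_given_dry = 0.2     # P(barometer falls | no rain)

# Bayes' theorem: P(rain | fall) = P(fall | rain) * P(rain) / P(fall)
p_fall = p_fall_given_rain * p_rain + p_fall_given_dry * (1 - p_rain)
p_rain_updated = p_fall_given_rain * p_rain / p_fall
print(round(p_rain_updated, 3))   # 0.933 - the belief is revised upwards
```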

A criticism of the Bayesian approach to probability is that it is subjective, and that because different agents possess different amounts of information the probabilities they express tell us nothing about the world and refer only to the opinions themselves. We cannot therefore arrive at the same conclusions about data since we start at different places. The Bayesian argument against this is based on two principles:

  1. Our beliefs are based on defensible reasoning and evidence.
  2. Through an ongoing process of analysis (accumulation of data, reviewing methodologies and assumptions, etc.) differences in prior positions are resolved and consensus is reached.

Described in these terms, Bayesian probability is itself a model of an ongoing process of scientific inquiry in which differences of opinion are acknowledged and resolved by examining and re-examining data and methods so that clear conclusions may be reached: as we learn more and more about the system we are studying, the weight attached to the evidence comes to count for more than our prior beliefs.

The Bayesian argument is I think useful for thinking about researching genre. I’m not advocating that we should start calculating probabilities for our degrees of belief in genres; only that we should use this approach to reasoning as a model for understanding how we conduct research in situations where we do not have definite categories. The statistician CR Rao put it in the following terms: uncertain knowledge + knowledge of amount of uncertainty = useful knowledge. We want useful knowledge about genre, and we can get it despite our uncertainty about genres.

The results of my study of recent genre trends at the US box office found that a limited range of special effects-based films from the action/adventure and fantasy/science fiction genres have come to dominate the US box office at the expense of character- and narrative-driven films (crime/thriller and drama films) that were previously identified as the most popular. These results are similar to those reported by Lu et al. (2005) and Ji and Waterman (2010), who found that the five most frequently occurring genres were action, adventure, comedy, thriller, and drama; and that all but the last of these had increased in frequency at the highest box office rankings while drama films had declined from being the most frequently occurring of these genres in 1967-1971 to the least frequently occurring in the period 2002-2004. These papers used a different method of assigning films to genres and yet my results broadly corroborate their conclusions. Now the authors of these studies and I both acknowledge that genre definition is a methodological problem, but since we now have some evidence and methods to evaluate we can start to pick out the key facts:

  1. the increasing dominance of spectacle-based technology-driven genres at the US box office
  2. the decline of ‘technology-unamenable’ genres

We can also pick out some points of difference. For example, my results indicate a decline in crime/thriller films, whereas these other studies do not. This may result from different ways in which films are classified, the different time periods covered by the studies (1960s-2000s or 1991-2010), or how deeply we go into the box office rankings (top 20 or top 50), and so on. But at least we can begin to understand why these differences occur and work towards resolving them because the papers give a description of their methodologies.

Thus, despite the fact that no-one agrees on genre definitions, we can come to some consensus about the main genre trends in the US. Not because we have plucked them out of thin air, but because we have a way of dealing with the inherent uncertainty with which researchers must cope. Despite the fact that we start from different places, we can arrive at similar conclusions and thereby establish a body of useful knowledge. This does not mean that we should view these studies as being mutually supporting, since relying on the principle of non-contradiction as a basis for empirical research leads to all sorts of ridiculous arguments (see here). But it does mean that as we update our knowledge and review our methods we can begin to build consensus rather than bemoaning the lack of agreement about the definitions of genres. Just as producers and audiences use genre categories every day with seemingly few problems, so do film scholars; and any conclusions we may come to are far more interesting than a recitation of the problems described above. After all, there is quite a lot of research on genre in film studies.

When conducting empirical research on genre we should bear in mind the following:

  • The genre definitions used by scholars are subjective but they are not arbitrary, being based on defensible reasoning
  • Empirical studies of genre need to be replicated to test conclusions
  • Replication of studies is required to identify where differences do in fact occur
  • Film scholars need to spend less time thinking about the problems of genre and devote more effort to accounting for the methodologies they do use so that others may properly evaluate their conclusions
  • The study of genre is an ongoing reflexive process

Genre may be a matter of opinion, but it is orderly opinion based on reasoned judgements, and the empirical study of genre is a reflexive, scientific process that arrives at definite, useful, and interesting conclusions even though we often start from different places.

Notes

  1. Eric Rohmer’s Ma nuit chez Maud/My Night at Maud’s (1969) features a discussion of Pascal’s wager in an early scene between Jean-Louis and Vidal that includes the concepts of expectation and utility (‘Mathematical hope: potential gain divided by probability’), the expression of subjective (i.e. Bayesian) probabilities, and the terms ‘hypothesis,’ ‘likely,’ ‘chance,’ ‘odds,’ ‘probability,’ and ‘infinite.’

References

Bordwell D 1989 Making Meaning: Inference and Rhetoric in the Interpretation of Cinema. Cambridge, MA: Harvard University Press.

Ji S and Waterman D 2010 Production Technology and Trends in Movie Content: An Empirical Study. Working Paper, Department of Telecommunications, Indiana University, Bloomington, IN.

Lu W, Waterman D, and Yan MZ 2005 Changing markets, new technologies, and violent conduct: an economic study of motion picture genre trends, The 33rd Annual Telecommunications Policy Research Conference, 23-25 September 2005, Washington, DC.

Moine R 2008 Cinema Genre, trans. Alistair Fox and Hilary Radner. Malden, MA: Blackwell.

Pye D 1975 Genre and movies, Movie 20: 29-43.

Stam R 2000 Film Theory: An Introduction. Oxford: Blackwell.

Tudor A 1974 Theories of Film. London: Secker and Warburg.

Statistical illiteracy in film studies

UPDATE: The paper at the end of this post is now available for advance access at Literary and Linguistic Computing, and can be cited as: The log-normal distribution is not an appropriate parametric model for shot length distributions of Hollywood films, Literary and Linguistic Computing, Advance Access published December 13, 2012, doi:10.1093/llc/fqs066. I will put up the paginated reference when the print version is released.

This week’s post combines two very different approaches to film studies: on the one hand we have outright anger, and on the other we have proper research. Both are equally important.

Are you au fait with this?

I wrote a second version of my paper examining the impact of sound technology on shot length distributions of Hollywood films using a larger sample of films. I also expanded on the methodology used (Mann-Whitney U, probability of superiority, etc.) since this has been highlighted as a problem before. (The original version is here). Having finished the article I sent it to The New Soundtrack at Edinburgh University Press. The article was turned down 24 hours later, and the reason given for rejecting the article was that, in the editors’ opinion,

our readership might not be quite au fait with the methodology you describe in the piece.

Nothing about the quality of the piece; just the lack of confidence the editors have in their readership.

What sort of intellectual cowardice is this? Are film scholars afraid of learning new things? Or is it that journal editors have such a low opinion of their readership that they need to protect them from anything that might be new or unusual? Does the readership of The New Soundtrack really not know what a ‘median’ is? Is there no sense of intellectual discovery?

If I was part of the readership of The New Soundtrack I would be very unhappy with this. Presumably, if I am a subscriber to an academic journal I am (or at least consider myself to be) a reasonably intelligent person capable of thinking and learning for myself. (Perhaps I am part of the sophisticated readership of Screen as well). Do I really need someone to decide for me what I might or might not be au fait with? Now I’m wondering what other research I’ve missed out on because an editor has decided what I might or might not be comfortable with.

Have you ever heard anything so pathetic? I have, and this is now the third time I have had a journal reject one of my articles because of the use of statistical methods (see here).

Statistical literacy

Statistical literacy is defined as

the ability to understand and critically evaluate statistical results that permeate our daily lives – coupled with the ability to appreciate the contributions that statistical thinking can make in public and private, professional and personal decisions (Wallman 1993: 1).

This is relevant because we encounter statistical information in diverse contexts in film and television studies: in the study of film style, in researching the economics of the film industry, in audience studies, and in scientific research on cognition and perception in the cinema. Understanding a great deal of research in film studies assumes that you have at least some degree of statistical literacy.

Gal (2002) argues that statistical literacy comprises two elements:

  • a knowledge component, in which individuals have the ability to ‘interpret and critically evaluate statistical information, data-related arguments, or stochastic phenomena which they may encounter in diverse contexts’ (2); and
  • a dispositional component, in which individuals develop a questioning attitude to research that purports to be based on data, a positive view of themselves as ‘individuals capable of statistical and probabilistic reasoning as well as a willingness and interest to “think statistically” in relevant situations’ and ‘a belief in the legitimacy of critical action even if they have not learnt much formal statistics or mathematics’ (19).

Arguably the dispositional component is the most important since the willingness to think statistically is a prerequisite for learning statistical concepts.

It is clear that the editors of The New Soundtrack have concerns about the statistical literacy of their readership. The editors apparently assume their readership will not have the required statistical knowledge to understand research presenting statistical analysis of data, and – much more damaging – they do not believe their readership has the capability or willingness to think statistically.

Altman (2002) notes that readers assume articles published in peer-reviewed journals are scientifically sound. But in order to make an informed judgement about the material that appears in peer-reviewed sources we need to be able to interpret it intelligently. This means that statistical literacy is a must for film studies, and it is a topic we will return to repeatedly over the rest of the year. In the next section I demonstrate how knowledge of statistical concepts and processes and a questioning attitude are essential in judging the importance of research in film studies.

The lognormal dragon is slain (again)

An example of the importance of developing statistical literacy in film studies comes in the form of a new book to be published this year featuring a chapter by Jordan De Long, Kaitlin L. Brunick, and James E. Cutting. The link to an online version of this paper is below Figure 1. I won’t explain the statistical concepts in detail, but I have provided links for statistical terms and concepts. I will assume you are an intelligent reader capable of and willing to learn for yourself.

In their chapter on film style, the authors make the following statement about the average shot length (ASL) as a statistic of film style:

Despite being the popular metric, ASL may be inappropriate because the distribution of shot lengths isn’t a normal bell curve, but rather a highly skewed, lognormal distribution. This means that while most shots are short, a small number of remarkably long shots inflate the mean. This means that the large majority of shots in a film are actually below average, leading to systematic over-estimation of individual film’s shot length. A better estimate is a film’s Median Shot Length, a metric that … provides a better estimate of shot length.

In support of this statement they include a graph that purports to show how the shot length distribution of one film is lognormal. This is the only piece of evidence they provide.

Figure 1 Histogram of shot lengths in A Night at the Opera with a fitted lognormal distribution from De Long J, Brunick KL, and Cutting JE 2012 Film through the human visual system: finding patterns and limits, in JC Kaufman and DK Simonton (eds.) The Social Science of Cinema. New York: Oxford University Press: in press. This graph was downloaded from the online version of this paper available at http://people.psych.cornell.edu/~jec7/pubs/socialsciencecinema.pdf.

Clearly, the authors have assumed their readership has a fairly sophisticated level of statistical literacy. They present their argument assuming you will be able to understand it or be capable of learning the relevant concepts. An entirely reasonable way in which to present an argument in a research output, and presumably an attitude that comes from their scientific (rather than film studies) background.

It’s just a shame it’s not true.

The key fact to bear in mind is that a variable (such as shot length) is said to be lognormally distributed if its logarithm is normally distributed. This allows us to apply a logarithmic transformation to the data and then determine whether the transformed data is normally distributed.

Figure 2 presents an exploratory data analysis of the data for this film, which can be accessed here.

Figure 2 Exploratory data analysis of shot lengths in A Night at the Opera

In the top left panel we see the histogram of the log-transformed shot lengths and it is immediately obvious this data set is not normally distributed. If De Long, Brunick, and Cutting are right, then this chart should be symmetrical about the mean. The histogram remains skewed even after the transformation is applied. The same pattern can be seen from the kernel density estimate (top right), which is clearly not symmetrical.

The normal probability plot (bottom left) shows the same pattern. If the data comes from a lognormal distribution then the points in this plot will lie along a straight line – in fact, along the red line shown in the plot. It is obvious that this is not the case and that the data points show clear evidence of a skewed data set. In the lower tail the fitted lognormal distribution underestimates the number of shorter takes, while in the upper tail it overestimates the number of longer takes. Definitively NOT lognormal.

Finally, the box plot (bottom right) clearly shows the distribution is asymmetrical with outliers in the upper tail of the distribution. This is a good example of the fact that log-transforming does not always remove the skew from a data set or deal with the problem of outliers.

The marks below the histogram, kernel density, and box plot (called a rug) indicate the actual values of the log-transformed shot lengths.
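A sketch of how panels along these lines could be produced is below; the file name is a placeholder and the output will not match Figure 2 exactly.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

shot_lengths = np.loadtxt("night_at_the_opera.txt")   # placeholder file name
log_shots = np.log(shot_lengths)

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
axes[0, 0].hist(log_shots, bins=30)                       # histogram of log shot lengths
axes[0, 0].set_title("Histogram of log shot lengths")
kde = stats.gaussian_kde(log_shots)                       # kernel density estimate
grid = np.linspace(log_shots.min(), log_shots.max(), 200)
axes[0, 1].plot(grid, kde(grid))
axes[0, 1].set_title("Kernel density estimate")
stats.probplot(log_shots, dist="norm", plot=axes[1, 0])   # normal probability plot
axes[1, 1].boxplot(log_shots, vert=False)                 # box plot
axes[1, 1].set_title("Box plot of log shot lengths")
plt.tight_layout()
plt.show()
```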

We can also apply formal statistical tests of the hypothesis that the shot length distribution is lognormal. Because a variable is lognormally distributed if its logarithm is normally distributed, then all we have to do is to apply normality tests to the transformed data.

The Shapiro-Francia test is based on the squared correlation of the theoretical and sample quantiles in the probability plot in Figure 2. For this film, the test statistic is 0.9585 and p < 0.01, so it is extremely unlikely that this data comes from a lognormal distribution and we have sufficient evidence to reject this hypothesis.

The Jarque-Bera test does the same thing in a different way. This test looks at the skew (its symmetry) and the kurtosis (the shape of its peak) of the data. For A Night at the Opera, the result of this test is 62.48 (p < 0.01) and again we have sufficient evidence to reject the hypothesis that this data comes from a lognormal distribution.
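For readers who want to repeat these checks, scipy implements the Jarque-Bera test directly; it does not include the Shapiro-Francia test, so the closely related Shapiro-Wilk test stands in for it in this sketch (results will therefore differ slightly from those quoted above):

```python
import numpy as np
from scipy import stats

log_shots = np.log(np.loadtxt("night_at_the_opera.txt"))   # placeholder file name

w, p_sw = stats.shapiro(log_shots)        # Shapiro-Wilk, a stand-in for Shapiro-Francia
jb, p_jb = stats.jarque_bera(log_shots)   # Jarque-Bera: skew and kurtosis
print(f"Shapiro-Wilk: W = {w:.4f}, p = {p_sw:.4g}")
print(f"Jarque-Bera: JB = {jb:.2f}, p = {p_jb:.4g}")
```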

In summary, De Long, Brunick, and Cutting present a single piece of evidence in support of the assertion that shot length distributions are lognormal, and it’s wrong. In fact, if you wanted to write a book about how shot length distributions are not lognormal and wanted to put an example of this on the cover then A Night at the Opera would be the film you would use.

Clearly, there is a problem with the histogram in Figure 1, which shows the shot length data on an untransformed scale. The reason for applying a logarithmic transformation is to make it easier to see the structure of the data, so why not view it on a logarithmic scale? When we view the data on a logarithmic scale we come to the opposite conclusion to the one the authors draw from Figure 1. It requires statistical literacy on the part of the reader to question whether this is an appropriate way of presenting data and to question the interpretation presented by the authors.

Obviously we cannot say that just because we can show that one film is not lognormally distributed this is true for all films. In order to properly assess the validity of De Long, Brunick, and Cutting’s assertion we need to test a sample of films representing a defined population, and this is precisely what I have done. The following paper demonstrates that the lognormal distribution is not an appropriate parametric model for the shot length distributions of Hollywood films:

Nick Redfern – The lognormal distribution and Hollywood cinema

Abstract

We examine the assertion that the two-parameter lognormal distribution is an appropriate parametric model for the shot length distributions of Hollywood films. A review of the claims made in favour of assuming lognormality for shot length distributions finds them to be lacking in methodological detail and statistical rigour. We find there is no supporting evidence to justify the assumption of lognormality in general for shot length distributions. In order to test this assumption we examined a total of 134 Hollywood films from 1935 to 2005, inclusive, to determine goodness-of-fit of a normal distribution to log-transformed shot lengths of these films using four separate measures: the ratio of the geometric mean to the median; the ratio of the shape factor σ to the estimator σ* = √(2·ln(m/M)); the Shapiro-Francia test; and the Jarque-Bera test. Normal probability plots were also used for visual inspection of the data. The results show that, while a small number of films are well modelled by a lognormal distribution, this is not the case for the overwhelming majority of films tested (125 out of 134). Therefore, we conclude there is no justification for claiming the lognormal distribution is an adequate parametric model of shot length data for Hollywood films, and recommend the use of robust statistics that do not require underlying parametric models for the analysis of film style.
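For readers unfamiliar with the second measure, the estimator σ* follows from the standard relations between the mean and median of a lognormal distribution:

```latex
% For a lognormal variable with parameters \mu and \sigma:
%   mean  m = e^{\mu + \sigma^{2}/2}, \qquad  median  M = e^{\mu},
% so that m/M = e^{\sigma^{2}/2}, and solving for \sigma gives
\sigma^{*} = \sqrt{2\,\ln\!\left(\tfrac{m}{M}\right)}
```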

Placing this paper alongside my earlier posts testing the lognormality of shot length distributions for Hollywood films prior to 1935 (see here), we can now conclude there is no evidence to justify assuming this model for Hollywood films in general.

References

Altman DG 2002 Poor-quality medical research: what can journals do?, Journal of the American Medical Association 287 (21): 2765-2767.

De Long J, Brunick KL, and Cutting JE 2012 Film through the human visual system: finding patterns and limits, in JC Kaufman and DK Simonton (eds.) The Social Science of Cinema. New York: Oxford University Press: in press.

Gal I 2002 Adults’ statistical literacy: meanings, components, responsibilities, International Statistical Review 70 (1): 1-51.

Wallman KK 1993 Enhancing statistical literacy: enriching our society, Journal of the American Statistical Association 88 (421): 1-8.

Opinion or fact?

The Artist has been wowing audiences across the world. The film has already won some awards, and is hotly tipped for many more. It has also been attracting much interest in the press, and film scholars have been roped into this.

In an interview with the BBC, silent film expert Bryony Dixon of the BFI made a series of statements that are worth reflecting upon:

  1. watching silent films is more rewarding than watching contemporary Hollywood action blockbusters
  2. watching a silent film requires more work on the part of the viewer
  3. slower edited films require greater concentration than rapidly edited films

You can view the video of the interview here. The text on this web page includes the following sentence:

Bryony Dixon, a silent film expert from the BFI, told BBC News that because silent films require more concentration, the rewards of watching them are richer than action blockbusters.

So let’s take these three statements in turn:

1. Watching silent films is more rewarding than watching contemporary Hollywood films

I am aware of no research that compares the viewing pleasures derived from silent films to sound films, and I have not been able to find any such research. In fact, what viewers find rewarding about the film experience is an under-researched area of film studies. If anyone knows of any research in this area please feel free to add a comment to this post listing the appropriate references.

This is just Dixon’s opinion, and we should not be surprised that an expert on silent films should prefer silent films. Other people will have their own opinions, tastes, and preferences. The difference is that other people will not have the opportunity to express them on the BBC under the heading ‘Expert on the rewards of silent film.’ This is problematic because it presents Dixon’s opinion as fact (‘An expert says …’). This may be the fault of the BBC and the way it has presented the interview, but from watching the video I doubt it.

Of course, a factor here is that there has not been much in the way of silent film since 1930 and so research on what viewers think about silent films has inevitably been extremely limited. The Artist provides an excellent opportunity for researchers to engage with this topic.

2. Watching a silent film requires the viewer to work harder

There is no research that I can find looking at the cognitive load of silent cinema (probably for the reasons noted above), and the literature on cognitive load in film viewing is somewhat limited in general. An interesting place to start is this paper from Nitzan Ben-Shaul:

Ben-Shaul N 2003 Split attention problems in interactive moving audiovisual texts, Fifth International Digital Arts and Culture Conference, Melbourne, Australia, 19-23 May, 2003.

It is also worth reading Julian Hochberg and Virginia Brooks’s work on film viewing and visual momentum as it gives a general description of how observers attend to images (both moving and still) and how we cognitively process this information:

Hochberg J and Brooks V 1978 Film cutting and visual momentum, in JW Senders, DF Fisher, and RA Monty (eds.) Eye-movements and the Higher Psychological Functions. Hillsdale, NJ: Erlbaum: 293-313.

Hochberg J and Brooks V 1996 Movies in the mind’s eye, in D Bordwell and N Carroll (eds.) Post-Theory: Reconstructing Film Studies. Madison, WI: University of Wisconsin Press: 368-387.

Cognitive load theory (CLT) might support the opposite conclusion to Dixon’s assertion. According to CLT, we have only a limited amount of working memory and the cognitive load of a task is determined by the number and complexity of the steps involved that use up those resources. The following example is from Gutashaw WE and Brigham FJ 2005 Instructional support employing spatial abilities: using complimentary cognitive pathways to support learning in students with achievement deficits, in TE Scruggs, MA Mastropieri (eds.) Cognition and Learning in Diverse Settings: Amsterdam: Elsevier: 47-70.

Watching a film in a language one does not understand but with subtitles is an example of an increased cognitive load over watching the same film in one’s own language. Now imagine watching a subtitled film with poor reading skills. The cognitive load increases dramatically (66).

Thinking along similar lines, we might suppose that, because we do not have to attend to dialogue as well as images, the cognitive load of watching a silent film is lower than that of watching a film with synchronised dialogue, which requires attention to multiple sensory modalities.

There has been no direct research on cognitive load that could answer this question, and so I make this argument as a hypothesis only, but as we see in relation to the next point the evidence indicates it is faster editing that increases the cognitive load on the viewer.

Cognitive load theory does play an important role in the media theory of Richard Mayer and Roxana Moreno, and you can find an introduction to their research here: Mayer RE and Moreno R 1998 A cognitive theory of multimedia learning: implications for design principles, ACM SIGCHI Conference on Human Factors in Computing Systems, 18-23 April 1998, Los Angeles.

Dixon’s statement sounds plausible, but without supporting research it is nothing more than a hypothesis, and there are other hypotheses to be made and tested on this point. Of course, it may be that I just haven’t been looking for research in the right places, and so if anyone knows of research demonstrating whether this statement is true or not then please let me know.

3. Films edited more slowly require more concentration than rapidly cut films

There are a couple of things to consider here. First, contemporary film audiences are less likely to be familiar with silent films than they are with modern action blockbusters. Therefore, they may concentrate more on something unfamiliar than on something commonplace, and this would account for a difference in viewers’ experience. We may find that with increasing experience viewing habits change, so that viewers familiar with both silent films and contemporary cinema watch them in the same way. Again, this relates to the cognitive load placed on the viewer. This sounds plausible, but as noted above I have been able to find no research on this topic. In fact, I can find no research on viewers’ ‘concentration’ in the cinema, and this leads us to our second problem: what is meant by ‘concentration?’ Dixon never defines the terms she uses, and it may mean the number of times a viewer looks at the screen, the length of time the viewer looks at the screen, the focus of the viewer’s attention when looking at the screen, etc.

If we take concentration to mean something similar to attention, then there is some research on this topic, and it contradicts Dixon’s assertion that slower films require more concentration than rapidly edited films. Research on the limited capacity model of viewership has shown that rapid pacing in motion pictures requires increased allocation of perceptual resources. The research can be read in this paper:

Lang A, Bolls P, Potter RF, and Kawahara K 1999 The effects of production pacing and arousing content on the information processing of television messages, Journal of Broadcasting and Electronic Media 43 (4): 451-475.

The limited capacity model defines the viewer as an information processor faced with a variably redundant ongoing stream of audio-visual information, in which the message content is the topic, genre, and information contained in a message. Therefore, ‘viewing is the continuous allocation of a limited pool of processing resources to the cognitive process required for viewers to make sense of a message.’ (I don’t like this definition of message content – it seems somewhat circular to me).

This research looked at the effect of production pacing and content on attention in the cinema, testing the hypothesis that both pacing and arousing content should increase the level of resources automatically allocated to processing the message. The results showed this is indeed the case: arousing content and fast pacing increased self-reported arousal in television viewers, and both factors increased the allocation of resources to processing messages.

This is also discussed in a subsequent paper (below), which showed that faster pacing resulted in the allocation of greater resources by viewers in attending to a television message and that self-reported arousal also increased with editing pace.

Lang A, Zhou S, Schwartz N, Bolls PD, and Potter RF 2000 The effects of edits on arousal, attention, and memory for television messages: when an edit is an edit can an edit be too much?, Journal of Broadcasting and Electronic Media 44 (1): 94-109.

In summary, there is no evidence that slower films require greater concentration by film viewers but there is evidence that faster paced films – such as (but obviously not limited to) action blockbusters – do elicit greater allocation of information processing resources (including attention).

A final point to make is that we do not yet know the distribution of shot lengths in The Artist, and so comparing its pace to other films is not yet possible. It will be interesting when the film comes out on DVD and we can look at it frame-by-frame to see whether its editing style is closer to that of contemporary cinema or to that of silent films of the 1920s. However, as yet we cannot make any empirical statement about the contribution of editing to the pace of this film.

Dixon’s comments raise some interesting questions about the nature of film viewing and silent cinema, but in the absence of supporting evidence they are opinions and not facts. The danger comes when we accept the former as the latter without asking questions or referring to the existing research in this area. Empirical research allows us to reject incorrect and empty opinions by establishing what the facts actually are. This is what film studies is supposed to be for.

Time series analysis of ITV news bulletins

Back in the summer I wrote a post looking at the relationship between the discourse structure and the formal structure of BBC news bulletins (see here). This week I have the first draft of a similar paper looking at news bulletins from ITV.

The pdf file can be accessed here: Nick Redfern – Time series analysis of ITV news bulletins

Abstract

We analyze shot length data from the three main daily news bulletins broadcast on ITV 1 from 8 August 2011 to 12 August 2011, inclusive. In particular, we are interested in comparing the distribution of shot lengths of bulletins broadcast on different days and at different times across this period, and in examining the time series structure by identifying clusters of shots of shorter and longer duration in order to understand the relationship between this aspect of the formal structure and the discourse structure of these broadcasts. The discourse structure of the bulletins in this sample is fixed, and remains constant irrespective of the subject of the news items themselves, suggesting that content is adapted to meet the needs of this structure. The statistical results show that neither the day nor the time of broadcast has any impact on the distribution of shot lengths, and the editing style is consistent across the whole sample. There is no common pattern to the time series of these bulletins, but there are some consistent features: clusters of longer takes are associated with static shots of people talking on-screen, while clusters of shorter takes occur with montage sequences, sports reports, series of news items, and footage from non-ITN sources. Consequently, the presence and order of discourse elements in a bulletin shapes its formal structure.

The data for the bulletins used in this study can be accessed as an Excel 2007 file here: Nick Redfern – ITV News Bulletins
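
By way of illustration of the kind of comparison described in the abstract – a sketch only, not necessarily the procedure used in the paper – a Kruskal-Wallis test can be used to ask whether the day of broadcast shifts the distribution of shot lengths. The bulletin names and shot lengths below are hypothetical; in practice they would be read from the Excel file.

```python
from scipy import stats

# Hypothetical shot length samples (in seconds) for bulletins broadcast on different days.
bulletins = {
    "Mon 18:30": [3.2, 5.1, 2.4, 8.0, 4.6, 12.3],
    "Tue 18:30": [2.9, 4.8, 3.1, 7.2, 5.5, 10.1],
    "Wed 18:30": [3.5, 6.0, 2.2, 9.4, 4.1, 11.8],
}

# Kruskal-Wallis test: a large p-value is consistent with the day of broadcast
# having no impact on the distribution of shot lengths.
statistic, p_value = stats.kruskal(*bulletins.values())
print(statistic, p_value)
```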

I’m a little wary of making direct comparisons between this data and that of the BBC news bulletins as they are separated by three months and deal with news presentation in very different circumstances. The data used in the ITV study covers the week of the riots in the UK this August, and this presents a very different news cycle to that seen in the BBC data from April. However, some general points can be made:

  • In both samples clusters of longer shots are associated with people speaking at length on camera, and these shots are framed in the same way.
  • In both samples clusters of shorter shots are often associated with montage sequences accompanied by a description from an off-screen reporter or with footage that is derived from other sources (e.g. library footage, other broadcasters).
  • In both samples, there is no evidence of any trends or cycles in the time series.
  • There is no significant difference in the median shot lengths and dispersion of shot lengths in the two samples of bulletins (BUT remember these are from different times of the year, so this information is only of limited use).
  • Day and time of broadcast have no impact on news bulletins for either broadcaster (but again the comparison is not as direct as I would like).

Overall, there is some evidence that news bulletins are stylistically homogeneous across these broadcasters. I will do another study comparing bulletins from both the BBC and ITV from a single week, but this will have to wait for another day.

Analysing film texts

The statistical analysis of literary style was initiated by Augustus De Morgan in 1851, when he observed that ‘I should expect to find that one man writing on two different subjects agrees more nearly with himself than two different men writing on the same subject’ and suggested that average word length would be an appropriate indicator of style. This was followed up by TC Mendenhall, who analysed the works of William Shakespeare and Sir Francis Bacon by looking at the frequency distributions of word lengths.
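
As a minimal illustration of Mendenhall’s approach – a sketch only, applied here to De Morgan’s own sentence – the word-length frequency distribution of a passage can be computed in a few lines of Python:

```python
import re
from collections import Counter

def word_length_distribution(text):
    """Count how many words of each length occur in a passage."""
    words = re.findall(r"[A-Za-z']+", text)
    return Counter(len(word) for word in words)

passage = ("I should expect to find that one man writing on two different subjects "
           "agrees more nearly with himself than two different men writing on the same subject")
print(sorted(word_length_distribution(passage).items()))
```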

It may seem that focussing on literary style will be of little use when dealing with films, but there is a body of research that examines film scripts and audio descriptions in order to understand the structure of narrative cinema. This post presents links to some of this material. I had intended to include this research in some of the earlier posts on empirical studies of film style, but it never quite seemed to fit (and I may have forgotten on more than one occasion). Besides, it deserves a post of its own.

The best place to start is probably Andrew Vassiliou’s Ph.D thesis:

Vassiliou A 2006 Analysing Film Content: A Text Based Approach. University of Surrey, unpublished Ph.D thesis.

The aim of this work is to bridge the semantic gap with respect to the analysis of film content. Our novel approach is to systematically exploit collateral texts for films, such as audio description scripts and screenplays. We ask three questions: first, what information do these texts provide about film content and how do they express it? Second, how can machine-processable representations of film content be extracted automatically in these texts? Third, how can these representations enable novel applications for analysing and accessing digital film data? To answer these questions we have analysed collocations in corpora of audio description scripts (AD) and screenplays (SC), developed and evaluated an information extraction system and outlined novel applications based on information extracted from AD and SC scripts.

We found that the language used in AD and SC contains idiosyncratic repeating word patterns, compared to general language. The existence of these idiosyncrasies means that the generation of information extraction templates and algorithms can be mainly automatic. We also found four types of event that are commonly described in audio description scripts and screenplays for Hollywood films: Focus_of_Attention, Change_of_Location, Non-verbal_Communication and Scene_Change events. We argue that information about these events will support novel applications for automatic film content analysis. These findings form our main contributions. Another contribution of this work is the extension and testing of an existing, mainly-automated method to generate templates and algorithms for information extraction; with no further modifications, these performed with around 55% precision and 35% recall. Also provided is a database containing information about four types of events in 193 films, which was extracted automatically. Taken as a whole, this work can be considered to contribute a new framework for analysing film content which synthesises elements of corpus linguistics, information extraction, narratology and film theory.

These papers present different aspects of the approach, using written texts to distinguish between film genres, to explore the clustering of narrative events, and to examine the emotional responses of viewers.

Salway A, Lehane B, and O’Connor NE 2007 Associating characters with events in films, 6th ACM International Conference on Image and Video Retrieval, 9-11 July 2007, Amsterdam.

The work presented here combines the analysis of a film’s audiovisual features with the analysis of an accompanying audio description. Specifically, we describe a technique for semantic-based indexing of feature films that associates character names with meaningful events. The technique fuses the results of event detection based on audiovisual features with the inferred on-screen presence of characters, based on an analysis of an audio description script. In an evaluation with 215 events from 11 films, the technique performed the character detection task with Precision = 93% and Recall = 71%. We then go on to show how novel access modes to film content are enabled by our analysis. The specific examples illustrated include video retrieval via a combination of event-type and character name and our first steps towards visualization of narrative and character interplay based on characters occurrence and co-occurrence in events.

Salway A, Vassiliou A, and Ahmad K 2005 What happens in films?, IEEE International Conference on Multimedia and Expo, 6-8 July 2005, Amsterdam.

This paper aims to contribute to the analysis and description of semantic video content by investigating what actions are important in films. We apply a corpus analysis method to identify frequently occurring phrases in texts that describe films – screenplays and audio description. Frequent words and statistically significant collocations of these words are identified in screenplays of 75 films and in audio description of 45 films. Phrases such as `looks at’, `turns to’, `smiles at’ and various collocations of `door’ were found to be common. We argue that these phrases occur frequently because they describe actions that are important story-telling elements for filmed narrative. We discuss how this knowledge helps the development of systems to structure semantic video content.
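
A very stripped-down version of this kind of corpus analysis – counting frequently occurring two-word phrases in a description script – might look like the sketch below. The text is a made-up fragment, and a serious collocation analysis would also compare the counts against a reference corpus and test for statistical significance.

```python
import re
from collections import Counter

def frequent_bigrams(text, top_n=5):
    """Return the most frequent two-word phrases in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    bigrams = (" ".join(pair) for pair in zip(words, words[1:]))
    return Counter(bigrams).most_common(top_n)

# Hypothetical audio description fragment.
description = ("She looks at the door. He turns to her and smiles at her. "
               "She opens the door and looks at him.")
print(frequent_bigrams(description))
```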

Vassiliou A, Salway A, and Pitt D 2004 Formalizing stories: sequences of events and state changes, IEEE International Conference on Multimedia and Expo, 27-30 June 2004, Taipei, Taiwan.

An attempt is made here to synthesise ideas from theories of narrative and computer science in order to model high level semantic video content, especially for films. A notation is proposed for describing sequences of interrelated events and states in narratives. The investigation focuses on the idea of modelling video content as a sequence of states: sequences of characters’ emotional states are considered as a case study. An existing method for extracting information about emotion in film is formalised and extended with a metric to compare the distribution of emotions in two films.

Finally, a PowerPoint presentation by Andrew Salway that covers the topic fairly extensively can be accessed here.

Editing in Slumber Party Massacre (1982)

A few weeks ago I posted the order structure matrix of Halloween (1978), which can be accessed here. The overall editing structure of this film showed that the last portion, when Michael attacks Laurie – the final girl – was edited in a different fashion to the rest of the film. There was also some evidence of clustering of shorter shots when Michael is stabbing people to death and of longer takes when adult male characters are on screen.

To see if these features are common across the genre of slasher films, this week we have the editing structure of Slumber Party Massacre (1982), directed by Amy Holden Jones. The data include the opening credits as these are presented over narratively important scenes, but the closing credits are not included. The data can be accessed as an Excel file here: Redfern – Slumber Party. The order structure matrix is presented below, after a short sketch of how such a matrix can be constructed.
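
For anyone wanting to generate this kind of plot for themselves, the sketch below shows one plausible construction of an order structure matrix in Python: every shot is compared with every other shot, and a cell is shaded when the column shot is longer than the row shot, so that clusters of long takes appear as dark columns. This is an approximation for illustration rather than the exact procedure used to produce Figure 1, and the shot lengths are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def order_structure_matrix(shot_lengths):
    """Compare every pair of shots: cell (i, j) is 1 when shot j is longer than
    shot i, so a run of long takes shows up as a band of dark columns."""
    x = np.asarray(shot_lengths)
    return (x[None, :] > x[:, None]).astype(int)

# Hypothetical shot length data (in seconds).
shots = [4.2, 1.1, 0.9, 6.5, 2.3, 0.8, 1.5, 12.0]
matrix = order_structure_matrix(shots)

plt.imshow(matrix, cmap="gray_r", interpolation="none")
plt.xlabel("Shot number")
plt.ylabel("Shot number")
plt.show()
```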

Figure 1 The order structure matrix of Slumber Party Massacre

We can immediately see from Figure 1 that a similar pattern to Halloween is evident, with the ‘final girl’ sequence beginning at shot 706, when the killer – Russ Thorn – chases the girls outside and they battle to the death next to the swimming pool. The black column that can be seen just after shot 750 occurs when the supposedly dead Thorn rises from the pool to attack for the last time. This moment comprises only a few shots, but they are much longer (4-10 seconds) than those in the action that surrounds them.

Generally, the editing is slower in the first half of the film and becomes quicker as the killing spree becomes more intense, but we can see some clusters of short shots in the early part of the film. At shot 165, we have a ‘false killing:’ Thorn is using a drill to murder his victims, so when we see a drill coming through a door towards the head of the basketball coach we assume that she is the next in line, but it turns out that it is just someone installing a peephole in the door (below).


Although there have been a couple of early murders in the film, the killing really begins in earnest from shot 392, when the head of Brenda’s boyfriend comes off, and it is from this point that we start to see white spaces in the matrix, indicating that these shots tend to be shorter than those that precede them. The virtuoso piece of filmmaking in Slumber Party Massacre is the cross-cutting between the murder of one of the boys at the party and Valerie watching a slasher movie on television. This sequence lasts only ~104 seconds but comprises 42 shots (from shot 462), and is edited much more quickly than the scenes that precede and follow it. It is typical of this film that fast editing is associated with scenes of intense violence.

Clusters of long takes are also evident at various points in the film. Notably, there is a solid black column at shot 72, which begins a sequence featuring the main female characters in the shower after a basketball game (the game itself being the cluster of short shots from shot 46 to shot 70). A similar concentration of longer shots can be seen from shot 267, which is the sequence where the girls get undressed at the beginning of the slumber party. Nudity is thus edited more slowly than other scenes in the film.

Although the killing at the party is well under way by this point, we can see that things are edited more slowly in shots 503 to 565. This sequence lasts for just 10 minutes and focusses on Valerie and her worries that something strange is happening next door, the girls at the party trying to make themselves safe, and Thorn hiding the bodies of those who have so far been unfortunate. We have numerous shots of Valerie searching the grounds and the house, trying to find out what is happening, while the girls inside the house prepare for Thorn’s next attack. These scenes include many tracking shots that tend not to be evident at other more ‘stabby’ points in the film (pun intended). Like the example mentioned above, when Thorn rises slowly from the swimming pool, this slow sequence is associated with the creation of a sense of dread prior to the big finale. This can be interpreted as evidence that two different types of horror are present in such films – the ‘body horror’ of the violence and the creeping dread of what might be in the darkness – and that these are associated with two different editing regimes. It will of course require a larger sample of films to establish this, but the order structure matrix appears to be quite capable of picking out these different types of sequences.

Overall, the editing structure of Slumber Party Massacre comprises clusters of shorter shots associated with the violence of the penetrative killings and longer shots used for nudity and to create atmosphere, and is generally similar to that of Halloween.