
Some notes on cinemetrics V

This post addresses some issues raised by Mike Baxter as part of the ‘cinemetrics conversation’ at the Cinemetrics website (and is the post I would have produced last week had I been able to remember which bit of software had the right command to create the necessary graph). You can find an introduction to the conversation here and my first response to some of the issues raised here.

I want to address two issues: first, the nature of outliers in shot length distributions and better methods of representing such distributions than I have used up to now; and, second, the straw-man the median shot length has become in Baxter’s comments.

Baxter’s comments in response to the earlier post can be found in the second tab under his name here. In section 2 Baxter questions my use of the term ‘outlier’ and the definition used to identify such shots. This is fair enough – we wouldn’t get very far if such definitions weren’t questioned. In the examples of Lights of New York and The Scarlet Empress, Baxter argues there is no evidence of outliers since

it’s difficult to identify any point at which ‘extremes’ begin, or discontinuities in the distribution of the kind I think are needed to assert, with any confidence, that you are dealing with ‘outliers.’

Baxter never defines what such a discontinuity would look like and so his argument is vague. (Arguably this is the semantic version of a slippery slope).

Figure 1 shows the kernel density and boxplot of Lights of New York. There is a 12.2 second gap between the five shots of longest duration and the sixth longest – presumably the sort of discontinuity Baxter refers to – and he does concede he might be prepared to accept five shot lengths as extreme values (though he does not say on what basis). From Figure 1 we can see there are in fact several such discontinuities, and that the kernel density is zero at several points in the upper tail (indicating the kernels do not overlap), particularly above 30 seconds (which corresponds to the 22 extreme outliers identified using this type of boxplot). However, a limitation of this boxplot is that it does not take into account the skew of the distribution, and so over-identification of outliers is a problem.

Figure 1 Kernel density and boxplot of shot lengths in Lights of New York (1928)

Figure 2 presents the same data using an adjusted boxplot that takes into account the skewed nature of the data. This method uses the medcouple, a robust measure of skewness, to identify outliers. The adjusted boxplot can be generated using the adjbox() command in the R package robustbase.
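For readers who want to try this themselves, a minimal sketch in R follows; the data file and column name are hypothetical stand-ins for a film’s shot length data.

```r
# A minimal sketch, assuming the shot lengths (in seconds) are stored
# in a one-column CSV with the column named "length"; the file name
# is hypothetical.
library(robustbase)

shots <- read.csv("lights_of_new_york.csv")$length

# adjbox() shifts the whiskers using the medcouple, so fewer points
# in a long upper tail are flagged as outliers than with the standard
# 1.5 * IQR rule of the ordinary boxplot.
adjbox(shots, horizontal = TRUE, xlab = "Shot length (s)")

mc(shots)                        # the medcouple itself
adjbox(shots, plot = FALSE)$out  # the values flagged as outliers
```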

The number of outliers in Figure 2 is much smaller than in the original boxplot: in the upper tail 10 shots greater than 55 seconds are identified as outliers (or 3% of the total). Nonetheless, there are still some values which are sufficiently removed from the rest of the data to be classed as outliers even when accounting for the asymmetry of the distribution. Whether or not Baxter would accept this definition depends on the interpretation of his use of the term ‘discontinuity,’ which he does not define.

Surprisingly, this method also identifies three outliers in the lower tail of the distribution, something I was not expecting and will have to think about further.

Figure 2 Kernel density and adjusted boxplot of shot lengths in Lights of New York (1928)

The following article describes the adjusted boxplot and its calculation:

Hubert M and Vandervieren E 2008 An adjusted boxplot for skewed distributions, Computational Statistics & Data Analysis 52 (12): 5186-5201. An ungated, earlier version of this paper can be accessed here.

Even if we accept Baxter’s argument that there are no outliers in Lights of New York, it remains necessary to be aware of the problems caused by outliers in data sets and to check the distribution of shot lengths so that we are not fooled by non-robust statistics. Certainly more effort will have to be devoted to defining what is or is not an outlier (in either statistical or filmic terms) in research of this type. (But it is much easier when you remember which bit of software to use).

Finally, I wish to address a misrepresentation that has taken a hold at this early stage in the ‘cinemetrics conversation.’

Baxter writes

the use of either the ASL or median as the statistic for attempting to summarise ‘style’ doesn’t make much sense (as Salt observes) [original emphasis].

This argument is a straw-man.

I have never stated that the median shot length is the statistic for describing film style. I have argued that the median shot length is better than the mean shot length for describing film style, and should therefore be preferred for the following reasons:

  • the median is conceptually simple and easy to calculate, and is certainly no more difficult than the mean.
  • the median shot length has a clearly defined meaning and the difference between two median shot lengths is also meaningful, whereas neither the meaning of the mean shot length nor that of the difference between two mean shot lengths is clear (and both seem to change every time I raise an objection against them).
  • the median shot length is not affected by a monotone transformation (the median of a data set is the same as the median of the logarithmic transformation of a data set), while the possibilities for confusing the arithmetic and geometric means are endless.
  • the median locates the centre of a distribution irrespective of its shape, whereas this is not true of the mean.
  • the median is not affected by outliers or extreme values (however you choose to define them), whereas this is not true of the mean.
  • interpretations of film style based on the median shot length are consistent with graphical methods and (it turns out) with dominance statistics (Cliff’s d, HLΔ), while those based on the mean shot length are not.
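A quick R sketch illustrates two of these points – invariance under a monotone transformation and robustness to outliers – using purely illustrative data:

```r
# Illustrative data: a skewed set of shot lengths with one extreme value.
x <- c(2, 3, 3, 4, 5, 6, 8, 12, 60)

median(log(x)) == log(median(x))  # TRUE: the median commutes with log
mean(log(x)) == log(mean(x))      # FALSE: the mean does not

mean(x)        # 11.44 - pulled sharply upward by the 60-second shot
median(x)      # 5
mean(x[-9])    # 5.375 - the mean more than halves once the outlier is dropped
median(x[-9])  # 4.5   - the median barely moves
```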

But I have always argued that it is important to use a range of statistical methods to get a full understanding of the nature of film style.

As far as I am aware I am the only person writing about film style to even consider the dispersion of shot lengths in a motion picture and the appropriate methods for describing it. I am also the only person to use a range of graphical methods (probability plots, boxplots, empirical cumulative distribution functions, kernel densities, order structure matrices, running Mann-Whitney Z statistics, rank-frequency plots) to describe film style. I am the only person in film studies to employ confidence intervals, statistical hypothesis tests, effect sizes, or even to describe the methodologies I use in studying film style. (Others working outside film studies in disciplines where quantitative methods are commonplace use such tools as a matter of routine, and those within film studies would do well to learn from their example).

I am also the only person who has attempted to describe these methods so that others may try to analyse film style for themselves. I am the only person who has brought to the attention of researchers in film studies the availability of free learning resources and software for statistics. I am the only person to look outside film studies for empirical research on film style and to bring it to the attention of film scholars. I am the only person to address the issue of statistical literacy in film studies (here and here).

Baxter writes that

the accessibility of computational power, and essential simplicity of important statistical ideas (however mathematically complex) is a hobby-horse of sorts.

I am glad to hear this, because it means that if someone else is prepared to devote some time and effort to explaining statistical concepts and methods to film scholars then I won’t have to do it on my own.

However, as Baxter presents the argument, I am interested in the median shot length only, while Barry Salt apparently has no narrow attachment to a particular statistic of film style and embraces a pluralistic approach. Yet I am not aware of any forum in which Salt has conceded any ground on his view that the mean shot length is the only appropriate statistic of film style. In fact, I am unaware of any other statistics of film style used by Salt besides the average shot length and the histogram (while his odd comments on the calculation of kernel density estimates indicate he may not properly understand other methods).

Baxter has his argument back to front here: you won’t find methodological ecumenism in the statistical analysis of film style in the work of Barry Salt.


Using kernel densities to analyse film style

1. Introduction

Since a film typically comprises several hundred (if not several thousand) shots, describing its style clearly and concisely can be challenging. This is further complicated by the fact that editing patterns change over the course of a film. Numerical summaries are useful but limited in the amount of information they can convey about the style of a film: two films may have the same median shot length or interquartile range and yet have very different editing patterns. Numerical summaries are useful for describing the whole of a data set but are less effective when it comes to accounting for changes in style over time. These problems may be overcome by using graphical as well as numerical summaries to communicate large amounts of information quickly and simply. Graphs also fulfil an analytical role, providing insights into a data set and revealing its structure. A good graph not only allows the reader to see what the writer thinks is important about a data set, but also enables the researcher to discover what is important in the first place.

It should be common practice in the statistical analysis of film style to include graphical summaries (though this is rarely the case), and there are several different types of simple graphs that can be used. These include cumulative distribution functions, boxplots, vioplots, and time-ordered displays such as run charts and order structure matrices. In this post I describe two different uses of kernel density estimation as graphical methods for analysing film style. The next section introduces the basics of kernel density estimation. Section three discusses the use of kernel densities to describe and compare shot length distributions, while section four applies kernel densities to the point processes of two RKO musicals to describe and compare how cutting rates change over time.

2. Kernel Density Estimation

The kernel density is a nonparametric estimate of the probability density function of a data set, and shows us the range of the data, the presence of any outliers, the symmetry of the distribution (or lack thereof), the shape of the peak, and the modality of the data (Silverman 1986; Sheather 2004). A kernel density thus performs the same functions as a histogram but is able to overcome some of the limitations of the latter. Since no assumptions are required about the functional form of the data, kernel densities are a useful graphical method for exploratory data analysis (Behrens & Yu 2003). The purpose of exploratory data analysis is to reveal interesting and potentially inexplicable patterns in data so that we can answer the general question ‘what is going on here?’ Kernel densities allow us to do this by describing the relative likelihood that a shot in a film will take on a particular value, or by allowing us to see how the density of shots in a film changes over time.

The kernel density is estimated by summing the kernel functions superimposed on the data at every value on the x-axis. This means that we fit a symmetrical function (the kernel) over each individual data point and then add together the values of the kernels, so that the contribution of a data point x_i to the density at x depends on how far it lies from x. The kernel density estimator is

$$\hat{f}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$

where n is the sample size, h is a smoothing parameter called the bandwidth, and K is the kernel function. There are several choices for K (Gaussian, Epanechnikov, triangular, etc.) though the choice of kernel is relatively unimportant, and it is the choice of the bandwidth that determines the shape of the density since this value controls the width of the kernel. If the bandwidth is too narrow the estimate will contain lots of spikes and the noise of the data will obscure its structure. Conversely, if the bandwidth is too wide the estimate will be over-smoothed and this will again obscure the structure of the data. The kernel density estimate is an improvement on the use of histograms to represent the density of a data set since the estimate is smooth and does not depend on the end-points of the bins, although a shared limitation is the dependence on the choice of the bandwidth. Another advantage of the kernel density is that two or more densities can be overlaid on the same chart for ease of comparison whereas this is not possible with a histogram.
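The effect of the bandwidth can be seen directly by plotting the same data at several bandwidths. The sketch below uses simulated data intended only to mimic a skewed shot length distribution:

```r
# Simulated, positively skewed "shot lengths" for illustration only.
set.seed(1)
x <- rlnorm(200, meanlog = 1, sdlog = 0.7)

# adjust multiplies the default bandwidth chosen by density().
plot(density(x, adjust = 0.2), main = "Bandwidth and smoothing",
     xlab = "Shot length (s)")              # too narrow: spiky and noisy
lines(density(x), col = "red")              # default bandwidth
lines(density(x, adjust = 5), col = "blue") # too wide: over-smoothed
```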

Figure 1 illustrates this process for Deduce, You Say (Chuck Jones, 1956), in which the density shows how the shot lengths of this film are distributed. Beneath the density we see a 1-D scatter plot in which each line indicates the length of a shot in this film (xi), with several shots having identical values. The Gaussian kernels fitted over each data point are shown in red and the density at any point on the x-axis is equal to the sum of the kernel functions at that point. The closer the data points are to one another the more the individual kernels overlap and the greater the sum of the kernels – and therefore the greater the density – at that point.

All widely available statistical software packages produce kernel density estimates for a data set. An online module for calculating kernel densities can be found here.
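In R, for example, a density plot of the kind shown in Figure 1 takes only a few lines; the vector of shot lengths below is a placeholder rather than the data for any actual film:

```r
# A minimal sketch, assuming `shots` holds a film's shot lengths in seconds.
shots <- c(2.1, 3.4, 1.8, 6.2, 4.5, 12.7, 2.9, 3.3, 8.1, 25.4)

d <- density(shots)  # Gaussian kernel and automatic bandwidth by default
plot(d, main = "Kernel density of shot lengths", xlab = "Shot length (s)")
rug(shots)           # 1-D scatter of the data points, as in Figure 1
```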

3. Describing and comparing shot length distributions

A shot length distribution is a description of the data set created for a film by recording the length of each shot in seconds. Analysing the distribution of shot lengths in a motion picture allows us to answer questions such as ‘is this film edited quickly or slowly?’ and ‘does this film use a narrow or a broad range of different shot lengths?’ Comparing the shot length distributions of two or more films allows us to determine if they have similar styles: is film A edited more quickly than film B, and does it exhibit more or less variation in its use of shot lengths? A kernel density estimate provides a simple method for answering these questions.

From the kernel density of Deduce, You Say in Figure 1 we see the distribution of shot lengths is asymmetrical, with the majority of shots less than 10 seconds long. There is a small cluster of shots around 15 seconds in length, and there are three outliers greater than 20 seconds. From just a cursory glance at Figure 1 we can thus obtain a lot of information very quickly that can then guide our subsequent analysis. For example, we might ask which events are associated with the longer takes in this film.

Figure 1 The kernel density estimate of shot lengths in Deduce, You Say (Chuck Jones, 1956) showing the kernel functions fitted to each data point (N = 58, Bandwidth = 1.356)

Suppose we wanted to compare the shot length distributions of two films. Figure 2 shows the kernel density estimates of the Laurel and Hardy shorts Early to Bed (1928) and Perfect Day (1929). It is immediately clear that though both distributions are positively skewed, the shot length distributions of these two films are very different. The density of shot lengths for Early to Bed covers a narrow range of shot lengths while that for Perfect Day is spread out over a wide range. The high density at ~2 seconds for Early to Bed shows that the majority of shots in this film are concentrated at the lower end of the distribution with few shots longer than 10 seconds, while the lower peak for Perfect Day shows there is no similar concentration of shots of shorter duration and the shot lengths are spread out across a wide range (from 20 to 50.2 seconds) in the upper tail of the distribution. We can conclude that Early to Bed is edited more quickly than Perfect Day and that its shot lengths exhibit less variation; and though we could have come to these same conclusions using numerical summaries alone, the comparison is clearer and more intuitive when represented visually.

Figure 2 Kernel density estimates of shot lengths in Early to Bed (1928) and Perfect Day (1929)
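Overlaying two estimates, as in Figure 2, is also straightforward in R; the two vectors below are simulated stand-ins for the films’ actual shot length data:

```r
# Simulated stand-ins for the two films' shot length data.
set.seed(1)
early   <- rlnorm(150, meanlog = 1.0, sdlog = 0.5)  # narrow range, fast cutting
perfect <- rlnorm(150, meanlog = 1.8, sdlog = 0.9)  # spread over a wide range

d1 <- density(early)
d2 <- density(perfect)

plot(d1, xlim = range(d1$x, d2$x), ylim = range(d1$y, d2$y),
     main = "Comparing shot length densities", xlab = "Shot length (s)")
lines(d2, col = "red")
legend("topright", legend = c("Early to Bed", "Perfect Day"),
       col = c("black", "red"), lty = 1)
```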

4. Time series analysis using kernel densities

Film form evolves over time and we can use kernel density estimation to describe the cutting rate of a film. Rather than focussing on the length of a shot (L) as the time elapsed between two cuts, we are interested in the timing of the cuts (C) themselves. There is a one-to-one correspondence between cuts and shot lengths, and the time at which the jth cut occurs is equal to the sum of the lengths of the prior shots:

$$C_j = \sum_{i=1}^{j} L_i$$

Figure 3 shows the one-to-one nature of this relationship clearly.

Figure 3 The one-to-one relationship between shot lengths (Li) and the timing of a cut (Cj)

Analysis of the cutting rate requires us to think of the editing of a film as a simple point process (Jacobsen 2006). A point process is a stochastic process whose realizations comprise a set of point events in time, which for a motion picture is simply the set of times at which the cuts occur. We apply the same method used above to the point process to produce a density estimate of the time series. Just as the density in the above examples is greatest where shot lengths cluster together, the density of the point process is greatest when one shot quickly follows another – that is, where the shot lengths at that point in the film are shortest. Conversely, low densities indicate shots of longer duration, as consecutive cuts will be distant from one another on the x-axis. This is similar to the use of peri-stimulus time histograms and kernel methods in neurophysiology to visualize the firing rate and timing of neuronal spike discharges (see Shimazaki & Shinomoto 2010).

Using kernel density estimation to understand the cutting rate of a film as a point process is advantageous since it requires no assumptions about the nature of the process. Salt (1974) suggested using Poisson distributions as a model of editing as a point process described by the rate parameter λ, but this method is unrealistic since homogeneous Poisson point processes are useful only for applications involving temporal uniformity (Streit 2010: 1). For a motion picture the probability distribution of a cut occurring at any point in time is not independent of previous cuts, and the time series will often be non-stationary over the course of a film while also demonstrating acceleration and deceleration of the cutting rate because different types of sequences are characterised by different editing regimes. We expect to see clusters of long and short takes in a motion picture and so the assumption of a Poisson process will not be appropriate, while the presence of any trends will mean that the process does not satisfy stationarity. Modelling the cutting rate as an inhomogeneous Poisson point process by allowing λ to vary as a function of time may solve some – though not necessarily all – of these problems.
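To make the contrast concrete, the sketch below simulates an inhomogeneous Poisson point process by thinning (the Lewis-Shedler method); the rate function lambda() is a purely hypothetical cutting rate over the normalised running time:

```r
# Sketch: an inhomogeneous Poisson point process simulated by thinning.
# lambda() is a hypothetical cutting rate that cycles over [0, 1].
set.seed(2)
lambda     <- function(t) 80 + 60 * sin(6 * pi * t)
lambda_max <- 140

# Simulate a homogeneous process at the maximum rate, then keep each
# candidate cut with probability lambda(t) / lambda_max.
candidates <- cumsum(rexp(2000, rate = lambda_max))
candidates <- candidates[candidates <= 1]
keep <- runif(length(candidates)) < lambda(candidates) / lambda_max
cuts <- candidates[keep]

# The density now tracks lambda(t), unlike a homogeneous process
# whose expected density is flat across the running time.
plot(density(cuts, bw = 0.02), xlab = "Normalised running time",
     main = "Simulated inhomogeneous cutting rate")
```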

To illustrate the use of kernel densities in time series analysis we compare the editing of two films that feature Fred Astaire and Ginger Rogers: Top Hat (1935) and Shall We Dance (1937). In order to make a direct comparison between the evolution of the cutting rates, the running time of each film was normalised to a unit length by dividing each shot length by the total running time. In this case we treat slow transitions (e.g. fades, dissolves, etc.) as cuts, with the cut between two shots marked at the approximate midpoint of the transition. Figure 4 shows the resulting densities.
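The densities in Figure 4 can be produced along the following lines, assuming `shots` is the ordered vector of a film’s shot lengths (simulated here for illustration):

```r
# Sketch: density of the cut point process for one film.
set.seed(3)
shots <- rlnorm(400, meanlog = 1.5, sdlog = 0.8)  # placeholder data

cuts <- cumsum(shots)     # C_j = L_1 + ... + L_j, the time of the jth cut
cuts <- cuts / max(cuts)  # normalise the running time to unit length

# High density = rapid cutting; low density = long takes.
plot(density(cuts, bw = 0.02), xlab = "Normalised running time",
     main = "Cutting rate as a point process")
```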

From the plot in Figure 4 for Top Hat we see the density for this film comprises a series of peaks and troughs, but that there is no overall trend. The low densities in this graph are associated with the musical numbers, while the high densities occur with scenes based around the rapid dialogue between Astaire and Rogers. (See here for alternative time series analyses of Top Hat that use different methods but arrive at the same conclusions as those below).

The first musical number is ‘No Strings (I’m Fancy Free),’ which begins at ~0.07. Astaire is then interrupted when Rogers storms upstairs to complain about the racket, and we have a scene between the two in which both the dialogue and the editing are rapid. This corresponds to the peak from ~0.11 to ~0.13, and is then followed by a reprise of ‘No Strings,’ which is again shot in long takes. The next section of the film follows on the next day as Astaire takes on the role of a London cabby and drives Rogers across town; as before, this dialogue scene is quickly edited, resulting in a high density of shots at ~0.19. This sequence finishes with ‘Isn’t This a Lovely Day (to be Caught in the Rain),’ which accounts for the low density of shots from ~0.21 to ~0.27 since this number again comprises long takes. The rapid cutting rate during dialogue scenes is repeated when Rogers mistakes Astaire for a married man at the hotel, and is again followed by the low density of a slow cutting rate for the scenes between Astaire and Edward Everett Horton at the theatre and the number ‘Top Hat, White Tie and Tails’ at ~0.4. After this number the action moves to Italy and there is much less variation in the density of shots in the first part of these scenes, which are focussed on dialogue and narrative. There is no big musical number until ‘Cheek to Cheek,’ and this sequence accounts for the low density seen at ~0.66, being made up of just 13 shots that run to 435.7 seconds. The density increases again as we move back to narrative and dialogue until we get to the sequence in which Horton explains to the policeman the mix-up over who is married and who is not, and ‘The Piccolino,’ which begins at ~0.89 and runs until ~0.96.

The density plot of the point process for Shall We Dance differs from that of Top Hat in showing a trend over the running time of the film from higher to lower densities of shots, indicating that the cutting rate slows over the course of the film. Nonetheless we see the same pattern of troughs and peaks, and as in Top Hat these are associated with musical numbers and comedy scenes, respectively.

This film features numerous short dancing excerpts in its early scenes, but there is no large scale musical number until well into the picture. In fact, these early scenes are mostly about stopping Astaire dancing (e.g. when Horton keeps turning off the record), and the dialogue scenes that establish the confusion over Astaire’s married status as the ship departs France. These scenes are based around a similar narrative device to that used in Top Hat and are again edited quickly. The first big number in the film is ‘Slap that Bass’ and coincides with the low density section of the film beginning at ~0.17, indicating that this part of the film is edited more slowly than the first section. The cutting rate slowly increases until ~0.37, and this section includes the ‘Walking the Dog’ and ‘I’ve Got Beginner’s Luck’ numbers but is mostly made up of dialogue scenes between Astaire and Rogers. After this point the film exhibits a trend from higher to lower densities, and there are a number of smaller cycles present between 0.37 and 0.64. This section includes ‘They All Laughed (at Christopher Columbus)’ and the subsequent dance routine, which begins at ~0.48 and includes the trough at ~0.54. The low density section beginning at 0.64 is the scene between Astaire and Rogers in which they try to avoid reporters in the park, and comprises a number of lengthy dialogue shots and the film’s most famous number, ‘Let’s Call the Whole Thing Off.’ The editing then picks up during the dialogue scenes until we reach the next drop in the density at ~0.74, which coincides with the scenes on the ferry to Manhattan as Astaire sings ‘They Can’t Take That Away From Me.’ The next low density section begins at ~0.9 and is the big production at the end of the film, with distant framing, a static camera, and long takes showing off the ‘Hoctor’s Ballet’ sequence; this then gives way to a more rapidly cut section featuring numerous cutaways from the dancers to Rogers arriving at the theatre with the court order for Astaire, only to discover him on stage with dancers wearing masks of her face. The cutting rate then slows once more as Rogers insinuates herself into the ‘Shall We Dance’ routine and the film reaches its finale.

Figure 4 Kernel density estimates of the point processes for two RKO musicals with normalised running times

Comparing the two plots we note some of the low density periods coincide with one another. This is most clearly the case at around 0.2 and 0.64 in both films. The major numbers that end the films also occur at similar points in the narratives. This indicates that a musical number occurs at approximately the same points in both films even though the two films have different running times (Top Hat: 5819.9s, Shall We Dance: 6371.4s). This raises some interesting questions regarding the structure of other musicals featuring Astaire and Rogers. Is there always a musical number about a fifth of the way into an RKO musical featuring this pair? Is there always a major number about two-thirds of the way through the picture? And does the finale always occupy the last 10 per cent of the picture? Answers to these questions will have to wait until I finish transcribing all the films Astaire and Rogers made for RKO in the 1930s.

5. Conclusion

Kernel density estimation is a simple method for analysing the style of motion pictures, and the wide availability of statistical packages makes kernel densities easy to incorporate into empirical research. Since it requires no prior assumptions about the distribution of the data, this method is appropriate for exploratory data analysis. In this post we demonstrated how this method may be used to describe and compare the shot length distributions of motion pictures and for the time series analysis of film style.

References

Behrens JT and Yu C-H 2003 Exploratory data analysis, in JA Schinka and WF Velicer (eds.) Handbook of Psychology: Volume 2 – Research methods in Psychology. Hoboken, NJ: John Wiley & Sons: 33-64.

Jacobsen M 2006 Point Process Theory and Applications: Marked Point and Piecewise Deterministic Processes. New York: Birkhauser.

Salt B 1974 Statistical style analysis of motion pictures, Film Quarterly 28 (1): 13-22.

Sheather SJ 2004 Density estimation, Statistical Science 19 (4): 588-597.

Shimazaki H and Shinomoto S 2010 Kernel bandwidth optimization in spike rate estimation, Journal of Computational Neuroscience 29 (1-2): 171-182.

Silverman B 1986 Density Estimation for Statistics and Data Analysis. London: Chapman & Hall.

Streit RL 2010 Poisson Point Processes: Imaging, Tracking, and Sensing. Dordrecht: Springer.