Category Archives: Film Technology

The Veritiphone system

I have previously written three posts on the efforts of the Leeds inventor Claude Hamilton Verity to develop a sound-on-disc synchronisation system for motion pictures. In 1923 he sailed to America to work with the Vitagraph Film Company, though the result of this collaboration remains unknown. His efforts were reported worldwide but he has disappeared from the history of British cinema. You can read my earlier posts here, here, and here.

I had not thought about Verity for many months until Luke McKernan asked me a question yesterday, and I took the opportunity to have a quick search to see if anything new was available.

Rather wonderfully, I have just found a discussion at Gramophone Collecting which has images of two articles. One is by Verity himself, written for The Sound Wave in 1922, describing his ‘Veritiphone’ system complete with a picture of this unusual machine. There is even a picture of the man with his machine. The other is a description of his efforts.

The original discussion can be found here.

The introduction to the article reads:

We have had an opportunity of testing the acclaimed merits of the Veritiphone. This is the invention of Mr. Claude H. Verity, of Leeds, who has made a deep study of the synchronisation of moving pictures, and who has admittedly accomplished what at one time appeared to be an impossible feat, that of timing the movement of the lips of the speaker  with the recorded speech given coincidentally. The Veritiphone is, indeed, the outcome pure and simple of Mr. Verity’s pursuit of the science of synchronisation.

From this we can infer the Veritiphone system worked, performing exactly as Verity claimed and as reported around the world. And yet he is utterly unknown to historians of British cinema.

Here are the images from the forum.

The most important American film of the past four decades

The internet is of course a wonderful resource for researchers, providing access to an astonishing array of information. It is also a rabbit hole down which you can disappear for days on end following something that catches your eye. Consequently, this week’s post has absolutely nothing whatsoever to do with what I intended to write about. Instead, I found Loren Carpenter’s Vol Libre (1980) on Vimeo and have spent the past week reading about CGI, animation, and fractal geometry. This film marks the birth of CGI rendering in Hollywood filmmaking, making it possibly the most influential American film since Bonnie and Clyde (1967) kicked off the New Hollywood and Steven Spielberg’s Jaws (1975) changed the way movies are released.

Mathematicians often produce images and even films to illustrate principles and demonstrate what maths can do. Benoit Mandelbrot (1988: 8) discusses making films based on fractal processes as early as 1972, while Richard F. Voss was one of the pioneers of fractal imagery based on his work on 1/f noise (which James Cutting and colleagues have discussed at length in their research on attention and editing in Hollywood cinema).

At the time of Vol Libre Carpenter was employed by Boeing, but after premiering the film at a SIGGRAPH conference he went to Lucasfilm to work on the ‘Genesis’ sequences for Star Trek II: The Wrath of Khan, eventually becoming one of the co-founders of Pixar Animation Studios and its chief scientist. In an article for The College Mathematics Journal in 1984, Carpenter described creating fractal images for cinema:

The method I use is recursive subdivision, and it has a lot of advantages for the applications that we are dealing with here; that is, extreme perspective, dynamic motion, local control – if I want to put a house over here, I can do it. The subdivision process involves a recursive breaking-up of large triangles into smaller triangles. We can adjust the fineness of the precision that we use. For example, in ‘Star Trek,’ the images were not computed to as fine a resolution as possible because it is an animated sequence and things are going by quickly. You can see little triangles if you look carefully, but most people never saw them.
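Carpenter’s description is concrete enough to sketch in code. The following is a minimal, hypothetical illustration in Python of the recursive subdivision idea (a one-dimensional midpoint-displacement routine rather than Carpenter’s triangle subdivision, and in no way his actual algorithm): each segment is split at its midpoint, the midpoint is displaced by a random amount, and the displacement shrinks at each level, so the recursion depth plays the role of the ‘fineness of the precision’ he mentions.

```python
import random

def midpoint_displace(left, right, depth, roughness=0.5, scale=1.0):
    """Recursively subdivide a height profile between two (x, height) endpoints.

    depth controls the fineness of the result: shallower recursion leaves
    visible facets, just as the Genesis sequence left visible triangles.
    """
    if depth == 0:
        return [left, right]
    mid_x = (left[0] + right[0]) / 2.0
    mid_h = (left[1] + right[1]) / 2.0 + random.uniform(-scale, scale)
    mid = (mid_x, mid_h)
    # Each level scales the displacement down by `roughness`, giving self-similar detail.
    left_half = midpoint_displace(left, mid, depth - 1, roughness, scale * roughness)
    right_half = midpoint_displace(mid, right, depth - 1, roughness, scale * roughness)
    return left_half + right_half[1:]  # drop the duplicated midpoint

profile = midpoint_displace((0.0, 0.0), (1.0, 0.0), depth=8)
print(len(profile), "points in the fractal profile")
```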

In 2000 Loren Carpenter, along with Ed Catmull and Rob Cook, was awarded an Oscar for developing digital rendering systems.

Vol Libre does not appear very often in the film studies literature even though there are a lot of books on digital cinema. Stephen Prince discusses Vol Libre in Digital Visual Effects in Cinema: The Seduction of Reality (see pages 22-23 and 54-55); and Isaac Victor Kerlow mentions it in The Art of 3D Computer Animation and Effects (pages 16-17). A search on Google Scholar for “Vol Libre” brings up many articles on digital imagery and the history of computing but nothing from film studies, although Tim Lenoir (2000) mentions Vol Libre in passing in an article on new media in Configurations. This raises the possibility that many film scholars are unaware of, and have not seen, this important film.

Fortunately, there are many useful resources available.

An article on Pixar, including a discussion on Carpenter’s work, by Tekla S. Perry can be found here.

Two papers co-authored by Carpenter can be found at the Pixar on-line library here.

An interview with Carpenter can be found at Vimeo here.

References

Kerlow IV 2004 The Art of 3D Computer Animation and Effects, third edition. Hoboken, NJ: John Wiley & Sons.

Lenoir T 2000 All but war is simulation: the military-entertainment complex, Configurations 8 (3): 289-335.

Mandelbrot BB 1988 People and events behind The Science of Fractal Images, in H-O Peitgen and D Saupe (eds.) The Science of Fractal Images. New York: Springer: 1-19.

Prince S 2011 Digital Visual Effects in Cinema: The Seduction of Reality. New Brunswick, NJ: Rutgers University Press.

Motion, screen size, and emotion

This week, some very interesting papers on how movement and screen size impact on our experience and understanding of motion pictures. Particularly interesting is the paper that indicates small screens can be more immersive than big screens.

Bellman S, Schweda A, and Varan D 2009 Viewing angle matters – screen type does not, Journal of Communication 59 (3): 609-634.

Increasingly, television content is available to viewers across 3 different screen types: TVs, personal computers (PCs), and portable devices such as mobile phones and iPods. The purpose of this study was to see what effect physical and apparent screen size has upon ad effectiveness. Using a sample of 320 members of the Australian public, we found that TV ads can be just as effective on PCs and iPods. However, controlling for screen type, ads viewed from a closer distance (i.e. with a wider viewing angle) were more likely to be recalled the next day, and were associated with more favorable brand attitudes. Shorter programs, product relevance, and use of close-ups and detailed images made no difference to this general viewing-angle effect.

Bracken C and Pettey G 2007 It is REALLY a smaller (and smaller) world: presence and small screens, PRESENCE 2007: 10th International Workshop on Presence, Barcelona, Spain, 25-27 October 2007.

This study moved Presence into the realm of the smaller video format—comparing Apple iPod with a standard television presentation. Ninety-six students were exposed to one of two presentations on either an iPod or on a 32-inch television. Students saw either a 10-minute fast-paced (multiple cut) action sequence or a 10-minute slow-paced (long cut) conversation sequence from a feature length motion picture. The 2 x 2 design looked at differences in immersion, spatial presence and social realism. While previous research suggests that larger format presentations should generally result in higher levels of presence, this study found that subjects viewing the iPod reported higher levels of immersion. Social realism had a significant interaction with content/pace, and there was no significant difference between iPod and the 32-inch television in spatial presence.

Detenber BH and Reeves B 1996 A bio‐informational theory of emotion: motion and image size effects on viewers, Journal of Communication 46 (3): 66-84.

Detenber BH, Simons RF, and Bennet Jr GG 1998 Roll ‘em!: the effects of picture motion on emotional responses, Journal of Broadcasting and Electronic Media 42 (1): 113-127.

An experiment investigated the effects of picture motion on individuals’ emotional reactions to images. Subjective measures (self-reports) and physiological data (skin conductance and heart rate) were obtained to provide convergent data on affective responses. Results indicate that picture motion significantly increased arousal, particularly when the image was already arousing. This finding was supported by both the skin conductance and the self-report data. Picture motion also tended to prompt more heart-rate deceleration, most likely reflecting a greater allocation of attention to the more arousing images. In this study, the influence of picture motion on affective valence was evident only in the self-report measures – positive images were experienced as more positive and negative images as more negative when the image contained motion. Implications of the results and suggestions for future research are discussed.

Ravaja N 2004 Effects of image motion on a small screen on emotion, attention, and memory: moving-face versus static-face newscaster, Journal of Broadcasting and Electronic Media 48 (1): 108-133.

We examined the modulating influence of a small moving vs. static facial image on emotion- and attention-related subjective and physiological responses to financial news read by a newscaster, and on memory performance among 36 young adults. A moving-face newscaster was associated with high self-reported pleasure and arousal, but not with physiological arousal (electrodermal activity). Facial electromyographic responses to facial image motion were at variance with pleasure ratings. Facial motion was associated with decreased respiratory sinus arrhythmia, an index of attention, and improved memory performance for positive messages. A talking facial image on a small screen increases attention and knowledge acquisition.

Reeves B, Lang A, Kim EY and Tatar D 1999 The effects of screen size and message content on attention and arousal, Media Psychology 1 (1): 49-67.

The number of different screens that people confront is increasing. One potentially important difference in the psychological impact of screen displays is their size; new screens are both larger and smaller than older ones. A between-subjects experiment (n = 38) assessed viewer’s attention and arousal in response to three different size screens (56-inch, 13-inch, and 2-inch picture heights). Viewers responded to video images from television and film that displayed different emotions (# video segments = 60). Attention was measured by heart rate deceleration in response to the onset of pictures, and arousal was measured by skin conductance aggregated during viewing. Results showed that the largest screen produced greater heart rate deceleration than the medium and small screens. The large screen also produced greater skin conductance than the medium and small screens. For skin conductance, screen size also interacted with the emotional content of the stimuli such that the most arousing pictures (e.g., pictures of violence and sex) showed the highest levels of arousal on the large screen compared to the medium and small screens.

Simons RF, Detenber BH, Roedema TM and Reiss JE 1999 Emotion processing in three systems: the medium and the message, Psychophysiology 36: 619-627.

In the context of picture viewing, consistent and specific relationships have been found between two emotion dimensions (valence and arousal) and self-report, physiological and overt behavioral responses. Relationships between stimulus content and the emotion-response profile can also be modulated by the formal properties of stimulus presentation such as screen size. The present experiment explored the impact of another presentation attribute, stimulus motion, on the perceived quality of the induced emotion and on its associated physiological response pattern. Using a within-subject design, moving and still versions of emotion-eliciting stimuli were shown to 35 subjects while facial muscle, heart rate, skin conductance, and emotion self-reports were monitored. The impact of motion was dramatic. Self-report and physiological data suggested strongly that motion increased arousal, had little impact on valence, and captured and sustained the subject’s attention to the image.

The Mann-Whitney U Test

There is a dire need for film scholars to understand elementary statistics if they intend to use it to analyse film style. See here for the problems a lack of statistical education creates.

This post will illustrate the use of the Mann-Whitney U test using the median shot lengths of silent and sound Laurel and Hardy short films produced between 1928 and 1933 (see here). I will also look at effect sizes for interpreting the result of the test. Before proceeding, it is important to note that the Mann-Whitney U test goes by many different names (Wilcoxon Rank Sum test, Wilcoxon-Mann-Whitney, etc) but that these are all the same test and give the same results (although they may come in a slightly different format).

The Mann-Whitney U test

The Mann-Whitney U test is a nonparametric statistical test to determine if there is a difference between two samples by testing if one sample is stochastically superior to the other (Mann and Whitney 1947). By stochastic ordering we mean that data values from one sample (X) are more likely to assume small values than the data values from another sample (Y) and that the data values in X are less likely to assume high values than Y.  If Fx(z) ≥ Fy(z) for all z, where F is the cumulative distribution function, then X is stochastically smaller than Y.

We want to find out if there is a difference between the median shot lengths of silent and sound films featuring Laurel and Hardy. The null hypothesis for our experiment is that

the two samples are stochastically equal

(H0: Fsilent(z) = Fsound(z) for all z).

In other words, we assume that there is no difference between the samples – the median shot lengths of the silent films of Laurel and Hardy are no more likely to be greater or less than the median shot lengths of the sound films of Laurel and Hardy. (See Callaert (1999) on the nonparametric hypotheses for the comparison of two samples).

In order to perform the Mann-Whitney U test we take our two samples – the median shot lengths of the silent and sound films – and we pool them together to form a single, large sample. We then order the data values from smallest to largest and assign a rank to each value. The film with the smallest median shot length has a rank of 1.0, the film with the second smallest median shot length has a rank of 2.0, and so on. If two or more films have a median shot length with the same value, then we give each of those films the average of the tied ranks. For example, in Table 1 we see that five films have a median shot length of 3.3 seconds and that these films are 5th, 6th, 7th, 8th, and 9th in the ordered list. Adding together these ranks and dividing by the number of tied films gives us the average rank of each film: (5 + 6 + 7 + 8 + 9)/5 = 7.0.
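For anyone who prefers to let the software do this step, scipy.stats.rankdata uses exactly this average-rank convention for ties. A minimal sketch with made-up median shot lengths (the real values are in Table 1), including a five-way tie like the one described above:

```python
from scipy.stats import rankdata

# Hypothetical median shot lengths in seconds, already sorted for readability
medians = [2.7, 2.9, 3.1, 3.2, 3.3, 3.3, 3.3, 3.3, 3.3, 3.6]
print(rankdata(medians))
# [ 1.  2.  3.  4.  7.  7.  7.  7.  7. 10.]
# The five values tied at 3.3s occupy positions 5-9, so each receives
# the average rank (5 + 6 + 7 + 8 + 9)/5 = 7.0.
```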

Table 1 Rank-ordered median shot lengths of Laurel and Hardy silent (n = 12) and sound (n = 20) films

Notice that in Table 1, the silent films (highlighted blue) tend to be at the top of the table with lower rankings than the sound films (highlighted green), which tend to be in the bottom half of the table with the higher rankings. This is a very simple way to visualise the stochastic superiority of the sound films in relation to the silent films. If the two samples were stochastically equal then we would see more mixing between the two colours.

Now all we need to do is to calculate the U statistic. First, we add up the ranks of the silent and sound films from Table 1:

Sum of ranks of silent films = R1 = 1.0 + 4.0 + 7.0 + 7.0 + 7.0 + 10.5 + 12.0 + 13.0 + 14.0 + 18.0 + 18.0 + 22.5 = 134.0

Sum of ranks of sound films = R2 = 2.0 + 3.0 + 7.0 + 7.0 + 10.5 + 18.0 + 18.0 + 18.0 + 18.0 + 18.0 + 22.5 + 24.0 + 25.0 + 26.0 + 27.0 + 28.5 + 28.5 + 30.0 + 31.0 + 32.0 = 394.0

Next, we calculate the U statistics using the formulae

U1 = n1 × n2 + n1(n1 + 1)/2 – R1 and U2 = n1 × n2 + n2(n2 + 1)/2 – R2,

where n1 and n2 are the sizes of the two samples, and R1 and R2 are the sums of ranks above. For the above data this gives us

U1 = (12 × 20) + (12 × 13)/2 – 134.0 = 184.0 and U2 = (12 × 20) + (20 × 21)/2 – 394.0 = 56.0.

We want the smaller of these two values of U, and the test statistic is, therefore, U = 56.0. (Note that U1 + U2 = n1 × n2 = 240).
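If you want to check the arithmetic, the same calculation takes a few lines of Python, using the sample sizes and rank sums given above:

```python
n1, n2 = 12, 20         # silent and sound films
R1, R2 = 134.0, 394.0   # rank sums from Table 1

U1 = n1 * n2 + n1 * (n1 + 1) / 2 - R1   # 184.0
U2 = n1 * n2 + n2 * (n2 + 1) / 2 - R2   # 56.0
print(U1, U2, min(U1, U2), U1 + U2 == n1 * n2)   # 184.0 56.0 56.0 True
```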

To find out if this result is statistically significant we can compare it to a critical value for the two sample sizes: as n1 = 12 and n2 = 20, the critical value when α = 0.05 is 69.0. We reject the null hypothesis if the value of U we have calculated is less than the critical value, and as 56.0 is less than 69.0 we can reject the null hypothesis of stochastic equality in this case and conclude that there is a statistically significant difference between the median shot lengths of the silent films and those of the sound films. As the median shot lengths of the sound films tend to be larger than the median shot lengths of the silent films we can say that the sound films are stochastically superior.

Alternatively, if our samples are large enough then U is approximately normally distributed and we can calculate an asymptotic p-value using the following formula:

z = (U – μ)/σ, where μ = (n1 × n2)/2 and σ = √[n1 × n2 × (n1 + n2 + 1)/12].

For the above data, U = 56.0, μ = 120.0, and σ = 25.69. Therefore z = -2.49, and we can find the p-value from a standard normal distribution. The two-tailed p-value for this experiment is 0.013. (Note that ‘large enough’ is defined differently in different textbooks – some recommend using the z-transformation when both sample sizes are at least 20 whilst others are more generous and recommend that both sample sizes are at least 10).
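The normal approximation is equally quick to check in Python. (scipy.stats.mannwhitneyu will run the whole test from the raw shot length data; be aware that different implementations report U1, U2, or the smaller of the two, so check which convention your software uses.)

```python
from math import sqrt
from scipy.stats import norm

n1, n2, U = 12, 20, 56.0
mu = n1 * n2 / 2                            # 120.0
sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # 25.69
z = (U - mu) / sigma                        # -2.49
p = 2 * norm.sf(abs(z))                     # two-tailed p-value, approximately 0.013
print(round(z, 2), round(p, 3))
```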

If some more restrictive conditions are applied to the design of the experiment, then the Mann-Whitney U test is a test of a shift function (Y = X + Δ) for the sample medians and is an alternative to the t-test for the two-sample location problem. Compared to the t-test, the Mann-Whitney U test is slightly less efficient when the samples are large and normally distributed (ARE = 0.95), but may be substantially more efficient if the data is non-normal.

The Mann-Whitney U test should be preferred to the t-test for comparing the median shot lengths of two groups of films even if the samples are normal because the former is a test of stochastic superiority, while the latter is a test of a shift model and this is not an appropriate hypothesis for the design of our experiment. It simply doesn’t make sense to speak of the median shot length of a sound film in terms of a shift function as the median shot length of a silent film plus the impact of sound technology. You cannot take the median shot length of Steamboat Bill, Jr (X), add Δ number of seconds to it, and come up with the median shot length of Dracula (Y = X + Δ). Any such argument would be ridiculous, and only the null hypothesis of stochastic equality is meaningful in this context.

The probability of superiority

A test of statistical significance is only a test of the plausibility of the model represented by the null hypothesis. As such the Mann-Whitney U test cannot tell us how important a result is. In order to interpret the meaning of the above result we need to calculate the effect size.

A simple effect size that can be quickly calculated from the Mann-Whitney U test statistic is the probability of superiority, ρ or PS.

Think of PS in these terms:

You have two buckets – one red and one blue. In the red bucket you have 12 red balls, and on each ball is written the name of a silent Laurel and Hardy film and its median shot length. In the blue bucket you have 20 blue balls, and on each ball is written the name of a sound Laurel and Hardy film and its median shot length. You select at random one red ball and one blue ball and note down which has the larger median shot length. Replacing the balls in their respective buckets, you draw two more balls – one from each bucket – and note down which has the larger median shot length. You repeat this process again, and again, and again.

Eventually, after a large number of repetitions, you will have an estimate of the probability with which a silent film will have a median shot length greater than that of a sound film. (On Bernoulli trials see here).

The probability of superiority can be estimated without going through the above experiment: all we need to do is to divide the U statistic we got from the Mann-Whitney test by the product of the two sample sizes – PS = U/(n1 × n2). This is equal to the probability that the median shot length of a silent film (X) is greater than the median shot length of a sound film (Y) plus half the probability that the median shot length of a silent film is equal to the median shot length of a sound film: PS = Pr[X > Y] + (0.5 × Pr[X = Y]).
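The bucket experiment is just a Monte Carlo estimate of PS, and the formula above gives the same answer directly from the pair counts. A minimal sketch using made-up median shot lengths (the real values are in Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)
silent = np.array([3.1, 3.3, 3.4, 3.5, 3.6])   # hypothetical medians (seconds)
sound = np.array([3.5, 3.8, 3.9, 4.1, 4.4])    # hypothetical medians (seconds)

# The two-bucket experiment: repeated random draws, one ball from each bucket
x = rng.choice(silent, 100_000)
y = rng.choice(sound, 100_000)
ps_simulated = np.mean((x > y) + 0.5 * (x == y))

# The direct calculation: count the pairs instead of sampling them
diffs = silent[:, None] - sound[None, :]
U = np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)
ps_exact = U / (len(silent) * len(sound))

print(ps_simulated, ps_exact)   # the two agree, up to sampling error
```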

If the median shot lengths of all the silent films were greater than the median shot lengths of all the sound films, then the probability of randomly selecting a silent film with a median shot length greater than the median shot length of a sound film is 1.0.

Conversely, if the median shot lengths of all the silent films were less than the median shot lengths of all the sound films, then the probability of randomly selecting a silent film with a median shot length greater than the median shot length of a sound film is 0.0.

If the two samples overlap one another completely, then the probability of randomly selecting a silent film with a median shot length greater than the median shot length of a sound film is equal to the probability of randomly selecting a silent film with a median shot length less than the median shot length of a sound film, and is equal to 0.5.

So if there is no effect PS = 0.5, and the further away PS is from 0.5 the larger the effect we have observed.

There are no hard and fast rules regarding what values of PS are ‘small,’ ‘medium,’ or ‘large.’ These terms need to be interpreted within the context of the experiment.

For the Laurel and Hardy data, we have U = 56.0, n1 = 12, and n2 = 20. Therefore, PS = 56/(12 × 20) = 56/240 = 0.2333.

Let us now compare the effect size for the Laurel and Hardy paper with the effect size from my study on the impact of sound in Hollywood in general (access the paper here). For the Laurel and Hardy data PS = 0.2333, whereas for the Hollywood data PS = 0.0558. In both studies I identified a statistically significant difference in the median shot lengths of silent and sound films, but it is clear that the effect size is larger in the case of the Hollywood films than for the Laurel and Hardy films.

The Hodges-Lehmann estimator

If we have designed our experiment to understand the impact of sound technology on shot lengths in Laurel and Hardy films around a null hypothesis of stochastic equality, then it makes no sense to subtract the sample median of the silent films from the sample median of the sound films because this implies a shift function and therefore a different experimental design and a different null hypothesis.

If we are not going to test for a classical shift model, how can we estimate the impact of sound technology on the cinema in terms of a slowing in the cutting rate?

To answer this question, we turn to the Hodges-Lehmann estimator for two samples (HLΔ), which is the median of all the possible differences between the values in the two samples.

In Table 2, the median shot length of each of the Laurel and Hardy silent films is subtracted from the median shot length of each of the sound films. This gives us a total set of 240 differences (n1 × n2 = 12 × 20 = 240).

Table 2 Pairwise differences between the median shot lengths of Laurel and Hardy silent films (n = 12) and sound films (n = 20)

If we take the median of these 240 differences we have our estimate of the typical difference between the median shot length of a silent film and the median shot length of a sound film. Therefore, the average difference between the median shot lengths of the silent Laurel and Hardy films and the median shot lengths of the sound Laurel and Hardy films is estimated to be 0.5s (95% CI: 0.1, 1.1). (I won’t cover the calculation of the (Moses) confidence interval for the estimator HLΔ in this post, but for an explanation see here).

The sample median of the silent films is 3.5s and for the sound films it is 3.9s, and the difference between the two is 0.4s, but as the shift function is an inappropriate design for our experiment this actually tells us nothing. Now it would appear that the difference between the two sample medians and HLΔ are approximately equal: 0.4s and 0.5s, respectively. But it is important to remember that they represent different things and have different interpretations. The difference between the sample medians represents a shift function, whereas the Hodges-Lehmann estimator is the average difference between the median shot lengths.

Note that we can calculate the Mann-Whitney U test statistic directly from the above table. If we count the number of times a silent film has a median shot length greater than that of a sound film (i.e. Δ < 0, the green-highlighted numbers) and add this to half the number of times the silent and sound films have equal median shot lengths (i.e. Δ = 0, the red-highlighted numbers), then we have the Mann-Whitney U statistic that we derived above: U2 = 47 + (0.5 × 18) = 56. Equally, if we add the number of times a silent film has a median shot length less than that of a sound film (i.e. Δ > 0, the blue-highlighted numbers) to half the number of times the medians are equal, then we have U1 = 175 + (0.5 × 18) = 184.
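A short sketch of the calculations in this section, again with hypothetical medians standing in for Table 2 (the differences are sound minus silent, so Δ < 0 means the silent film has the larger median, as in the table):

```python
import numpy as np

silent = np.array([3.1, 3.3, 3.4, 3.5, 3.6])   # hypothetical medians (seconds)
sound = np.array([3.5, 3.8, 3.9, 4.1, 4.4])    # hypothetical medians (seconds)

# All n1 x n2 pairwise differences, sound minus silent, as in Table 2
diffs = (sound[:, None] - silent[None, :]).ravel()

hl_delta = np.median(diffs)                          # Hodges-Lehmann estimate
U2 = np.sum(diffs < 0) + 0.5 * np.sum(diffs == 0)    # silent > sound, plus half the ties
U1 = np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)    # silent < sound, plus half the ties
print(hl_delta, U1, U2, U1 + U2 == len(silent) * len(sound))
```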

Bringing it all together

Once we have performed our hypothesis test, calculated the effect size, and estimated the effect we can present our results:

The median shot lengths of silent (n = 12, median = 3.5s [95% CI: 3.2, 3.7]) and sound (n = 20, median  = 3.9s [95% CI: 3.5, 4.3]) short films featuring Laurel and Hardy produced between 1927 and 1933 were compared using a Mann-Whitney U test, with a null hypothesis of stochastic equality. The results show that there is a statistically significant but small difference of HLΔ = 0.5s (95% CI: 0.1, 1.1) between the two samples (U = 56.0, p = 0.013, PS = 0.2333).

These two sentences provide a great deal of information to the reader in a simple and economical format – we have the experimental design, the result of the test, and the practical significance of the result.

Note that at no point in conducting this test have we employed a ‘dazzling array’ of mathematical operations – in fact the most complicated thing in the whole process was to find the square root in the equation for σ above, and everything else was numbering items in a list, addition, subtraction, multiplication, or division.

Summary

The Mann-Whitney U test is ideally suited to our needs in comparing the impact of sound technology on film style, and has numerous advantages over the alternative statistical methods:

  • it is covered in pretty much every statistics textbook you are ever likely to read
  • it is a standard feature in statistical software (though you will have to check which name is used) and so you won’t even have to do the basic maths described above
  • it is easy to calculate
  • it is easy to interpret
  • it allows us to test for stochastic superiority rather than a shift model
  • it is robust against outliers
  • it does not depend on the distribution of the data
  • it can be used to determine an effect size (PS) that is easy to calculate and simple to understand
  • we have a simple estimate of the effect (HLΔ) that is consistent with the test statistic

If you want to compare more than two groups of films, then the non-parametric k-sample test is the Kruskal-Wallis ANOVA test (see here). The Mann-Whitney U test can also be applied as a post-hoc test for pairwise comparisons.
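For completeness, the k-sample test and the post-hoc comparisons are both single function calls in SciPy. A sketch with three hypothetical groups of median shot lengths (remember to correct the post-hoc p-values for multiple comparisons, for example with a Bonferroni adjustment):

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical median shot lengths (seconds) for three groups of films
groups = {
    "A": [3.1, 3.3, 3.4, 3.6, 3.8],
    "B": [3.5, 3.9, 4.1, 4.2, 4.6],
    "C": [4.0, 4.4, 4.7, 4.9, 5.2],
}

H, p = kruskal(*groups.values())
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

# Pairwise Mann-Whitney tests as post-hoc comparisons
for (name1, g1), (name2, g2) in combinations(groups.items(), 2):
    U, p = mannwhitneyu(g1, g2, alternative="two-sided")
    print(f"{name1} vs {name2}: U = {U}, p = {p:.4f}")
```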

References and Links

Callaert H 1999 Nonparametric hypotheses for the two-sample location problem, Journal of Statistics Education 7 (2): www.amstat.org/publications/jse/secure/v7n2/callaert.cfm.

Mann HB and Whitney DR 1947 On a test of whether one of two random variables is stochastically larger than the other, The Annals of Mathematical Statistics 18 (1): 50-60.

The Wikipedia page for the Mann-Whitney U test can be accessed here, and the page for the Hodges-Lehmann estimator is here.

For an online calculator of the Mann-Whitney U test you can visit Vassar’s page here.

For the critical values of the Mann-Whitney U test for samples sizes up to n1 = n2 = 20 and α = 0.05 or 0.01, see here.

More Visual Illusions

I like visual illusions – though I must admit that the rotating snakes (Figure 1) from Professor Akiyoshi Kitaoka’s illusion pages make me feel somewhat queasy.

Figure 1 Rotating snakes from Akiyoshi’s illusion pages (click on the image for a larger version or go to http://www.ritsumei.ac.jp/~akitaoka/index-e.html to see the illusion in all its glory).

You can find other versions of this illusion and many others at Akiyoshi’s illusion pages here, along with research papers that discuss the psychological basis of the illusions he features. An fMRI study of the above illusion is Kuriki I, Ashida H, Murakami I, and Kitaoka A 2008 Functional brain imaging of the Rotating Snakes illusion by fMRI, Journal of Vision 8 (10): 16, 1-10, and can be accessed here.

The Daily Cognition has twenty visual illusions here.

VisualIllusion.net presents a study of illusions from 1922 – Matthew Luckiesh’s Visual Illusions: Their Causes, Characteristics and Applications – in its entirety.

An interesting introduction to the role of visual illusions in psychological research is David Eagleman’s article on how the study of visual illusions has guided neuroscience research: Eagleman DM 2001 Visual illusions and neurobiology, Nature Reviews Neuroscience 2: 920-926.

Richard Gregory, who died in May of last year, conducted a large amount of research on visual perception and illusions, and the website dedicated to his memory – and featuring some of his papers on illusion and perception – can be accessed here.

Another researcher in visual illusions is Cornelia Fermüller; her website can be found here and includes examples of illusions and her research on a computational theory of optical illusions in video sequences and stereo images. One paper worth reading is Ogale AS, Fermüller C, and Aloimonos Y 2005 Motion segmentation using occlusions, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (6): 988-992.

We examine the key role of occlusions in finding independently moving objects instantaneously in a video obtained by a moving camera with a restricted field of view. In this problem, the image motion is caused by the combined effect of camera motion (egomotion), structure (depth), and the independent motion of scene entities. For a camera with a restricted field of view undergoing a small motion between frames, there exists in general a set of 3D camera motions compatible with the observed flow field even if only a small amount of noise is present, leading to ambiguous 3D motion estimates. If separable sets of solutions exist, motion-based clustering can detect one category of moving objects. Even if a single inseparable set of solutions is found, we show that occlusion information can be used to find ordinal depth, which is critical in identifying a new class of moving objects. In order to find ordinal depth, occlusions must not only be known, but they must also be filled (grouped) with optical flow from neighboring regions. We present a novel algorithm for filling occlusions and deducing ordinal depth under general circumstances. Finally, we describe another category of moving objects which is detected using cardinal comparisons between structure from motion and structure estimates from another source (e.g., stereo).

This paper from Alex Holcombe looks at the illusion of motion perception in the cinema:

Holcombe AO 2009 Seeing slow and seeing fast: two limits on perception, Trends in Cognitive Sciences 13 (5): 216-221. [This paper contains links to movies as part of the paper’s supplementary materials].

Video cameras have a single temporal limit set by the frame rate. The human visual system has multiple temporal limits set by its various constituent mechanisms. These limits seem to form two groups. A fast group comprises specialized mechanisms for extracting perceptual qualities such as motion direction, depth and edges. The second group, with coarse temporal resolution, includes judgments of the pairing of color and motion, the joint identification of arbitrary spatially separated features, the recognition of words and high-level motion. These temporally coarse percepts might all be mediated by high-level processes. Working at very different timescales, the two groups of mechanisms collaborate to create our unified visual experience.

Mel Slater’s paper looks at why we experience a sense of immersion in artificially created environments.

Slater M 2009 Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments, Philosophical Transactions of the Royal Society B 364 (1535): 3549-3557.

In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not ‘there’ and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality.

A different approach to the nature of immersion in visual perception of animated images can be found in this paper from Kenny Chow and Fox Harrell:

Chow KKN and Harrell DF 2009 Material-based imagination: embodied cognition in animated images, Cognition and Creativity, Digital Arts and Culture 2009, Arts Computation Engineering, UC Irvine, California, http://escholarship.org/uc/item/6fn5291r.

Drawing upon cognitive science theories of conceptual blending and material anchors, as well as recent neuroscience results regarding mirror neurons, we argue that animated visual graphics, as embodied images whose understanding relies on our perceptual and motor apparati, connect both material and mental notions of images. Animated visual images mobilize a reflective process in which material-based imaginative construction and elaboration can take place. We call this process as “material-based imagination,” in contrast to the general notion of imagination as purely a mental activity. This kind of imagination is pervasive in today’s digitally mediated environments. By analyzing a range of digital artifacts from computer interfaces to digital artworks, we show the important role of imaginative blends of concepts in making multiple levels of meaning, including visceral sensation and metaphorical narrative imagining, to exemplify expressiveness and functionality. The implications of these analyses collectively form a step toward an embodied cognition approach to animation phenomena and toward recentralizing understanding of artistic and humanistic production in cognitive research.

Finally, the 2011 finalists for the Illusion of the Year contest can be found here.

Claude Hamilton Verity III

I have drawn attention to the Leeds inventor Claude Hamilton Verity and his efforts to develop a sound-on-disc system for the synchronization of image and sound in two earlier posts that can be accessed here and here. This week I bring to your attention some other references to Verity I have come across recently.

First, an article by Frank H. Lovette and Stanley Watkins, titled ‘Twenty Years of “Talking Movies:” an Anniversary’ and published in the 1946 volume of Bell Telephone Magazine, refers to Verity as someone who made a notable contribution to the development of talking pictures alongside such illustrious names as Thomas A. Edison, Pathé Freres, Leon Gaumont, and Orlando E. Kellum. The article can be accessed here.

The authors clearly do not take The Jazz Singer in 1927 to be the point at which pictures began to talk, and instead choose as their starting point the demonstration of the Vitaphone system on 6 August, 1926, for the screening of Don Juan, starring John Barrymore. This is unsurprising given that Bell was itself involved in the development of this system, but they do describe this screening somewhat poetically:

Before the applause could die away, the dramatic sequences of Don Juan unfolded against their synchronized musical background. Scientists, public officials, prominent figures from many walks of life sat in amazement until the last crescendo and finale of this scientific marvel. The men who brought it into being by their refinement of existing arts were hailed as having made possible “the greatest invention of the twentieth century.” And Dr. Michael I. Pupin was led to exclaim that “no closer approach to resurrection has ever been made by science.” The pioneers of Western Electric and Bell Telephone Laboratories and their collaborators of Warner Brothers and Vitaphone experienced that night a measure of accomplishment which few men of science ever live to taste or see.

We can forgive the authors a touch of hyperbole when writing about Bell-developed technology in a Bell-funded journal, but this raises an interesting question about when we should date the earliest successful demonstration of synchronized sound in cinema. There were other inventors who successfully demonstrated the synchronization of sound and image prior to 1926, including Kellum’s Photo-kinema system and Verity’s system, both of which were demonstrated in 1921. D.W. Griffith used the Photo-kinema system for Dream Street, which premiered on 2 May, 1921, with two sound segments; and we have reports of the demonstration of two original shorts produced by Verity in Harrogate on 30 April, 1921 (see the first link above). We also know that in November 1923, Verity sailed to New York to meet with J. Stuart Blackton of the Vitagraph Film Company and gave an interview to The New York Times regarding the synchronization of sound and image in January 1924 (see the second link above). We do not know what impact Verity’s work in England had – if any – on the development of ‘the greatest invention of the twentieth century.’

The article refers to Verity’s system as Veritiphone, but this term appears only infrequently in other articles.

Second, two articles in the Wellington Evening Post from 1921 and 1923 refer to Verity’s efforts. These articles are available from Papers Past at the National Library of New Zealand, and rather wonderfully, they can be reproduced under a creative commons licence.

The first article was published on 3 June, 1921, and is largely a direct quote from an earlier article published in The Daily Mail. I have not found this earlier article, but given the timing I assume that the demonstration referred to was the one that took place in Harrogate in April 1921.

TALKING FILMS

PERFECT VOICE MOVEMENT CLAIM

Talking kinema films, it is claimed, have definitely advanced a stage as ‘the result of the invention of a synchroniser by Mr. Claude H. Verity, a Harrogate engineer. With this instrument in the projector-box, it is stated, an operator, by simply sliding a knob quite independently of watching the screen, can work synchronisation to 1-24th of a second.

In a Harrogate building where secrecy has been maintained for nearly five years of experimenting, writes a correspondent to the London Daily Mail, who has witnessed a straight drama and cross-talk comedy exhibited in conjunction with a gramophone. “There was no mistaking the accuracy of voice and lip movement. If it should vary a tenth of a second it would be due to the fact that the actors were so much out in repeating for the gramophone recorder what they had done for the screen. These processes are separate and are linked up by an expert stenographer.

“The synchroniser does away with the necessity for stopping the action of a picture to introduce worded explanations; indeed, dialogue becomes a distinct part of the picture.

“For operas with singing and music a child could work it because there is a fixed tempo. Should the film break the speaking can be stopped and taken up again.”

A great advantage of the invention, it is urged, is that with the apparatus in projecting-boxes the synchronised film could be circulated in the ordinary way.

The two films referred to above would be The Playthings of Fate (the drama) and A Cup of Beef Tea (the comedy). I would assume that this is the first time the term ‘cross-talk comedy’ is used in reference to the cinema.

The second article was published on 1 September 1923, and is only a passing reference to Verity as part of a much larger piece.

Synchronization of the film and its musical counterpart seems to be solved by the “Veritphone,” an invention of Claude H. Verity, of Leeds, England. It aims at the alliance of sound and movement by the combination of a double set of “super-gramophones,” and an ingenious indicator, which shows when the film and the sound record are together.

Details of Verity’s patents that give a more detailed explanation of how the system worked can be accessed in my earlier posts.

Third, and slightly confusingly, there is another reference to Verity in De Sumatra Post of 11 November 1922, derived from an article published in The Daily Mail. I have no idea what it says because it is in Dutch. The complete issue of De Sumatra Post can be downloaded as a pdf file here (it’s about 7.1 MB and I think it is from the Dutch equivalent of Papers Past), and the short piece referring to Verity is at the bottom of page 14.

Fourth, a notice in The Electrical Review 90 1922: 416 announces the successful demonstration of Verity’s system at the Albert Hall in Leeds in 1922 (the date is given as 3 March whereas other articles give the date as 3 April), noting that

By experiment over a considerable time past, Mr. Verity has provided an apparatus which certainly yields co-timing of the lip movements of the persons on the screen with the sounds emitted from the electrically-controlled gramophone, …

By the time of his 1922 demonstrations, Verity had spent at least 5 years and (by his own estimation) some £7000 of his own money developing his synchronisation system.

Finally, and a good deal less wonderful than anything from New Zealand, is a reference to Verity in an article published in Political Science Quarterly in 1948. The full reference is Swensen J 1948 The entrepreneur’s role in introducing the sound motion picture, Political Science Quarterly 63 (3): 404-423. I do not know how this piece refers to Verity – it may be only as a name in a footnote, possibly derived from the Bell Telephone Magazine article referred to above – because the article lies behind a paywall at JSTOR. There is no good reason why an article from 1948 should be behind a paywall in 2011.

Empirical studies in film style IV

It has been a while since I listed some research on the empirical analysis of film style – I could have sworn I did a post on this just before Christmas, but apparently not.

First, a couple of general papers that outline the principles of video content analysis (VCA) and the research that has been done in this area. This piece (here) is a set of PowerPoint slides by Alan Hanjalic (see below), in which he summarises the goals of VCA, its applications, and the different approaches that have been adopted by researchers. A literature survey of work in VCA is given in the following paper:

Brezeale D and Cook DJ 2008 Automatic video classification: a survey of the literature, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 38 (3): 416-430. DOI: 10.1109/TSMCC.2008.919173.

Earlier posts on the empirical analysis of film style can be accessed here, here, and here.

The papers referred to below all cover the relationship between emotion, style, and video content.

Arifin S and Cheung PYK 2006 User attention based arousal content modelling, IEEE International Conference on Image Processing, 8 November 2006, Atlanta, Georgia, USA.

Abstract

The affective content of a video is defined as the expected amount and type of emotion that are contained in a video. Utilizing this affective content will extend the current scope of application possibilities. The dimensional approach to representing emotion can play an important role in the development of an affective video content analyzer. The three basic affect dimensions are defined as valence, arousal and control. This paper presents a novel FPGA-based system for modeling the arousal content of a video based on user saliency and film grammar. The design is implemented on a Xilinx Virtex-II xc2v6000 on board a RC300 board.

The poster for this paper can be accessed here.

Hanjalic A 2006 Extracting moods from pictures and sounds, IEEE Signal Processing Magazine 23 (2): 90-100. DOI: 10.1109/MSP.2006.1621452.

From the introduction:

Intensive research efforts in the field of multimedia content analysis in the past 15 years have resulted in an abundance of theoretical and algorithmic solutions for extracting the content-related information from audiovisual signals. The solutions proposed so far cover an enormous application scope and aim at enabling us to easily access the events, people, objects, and scenes captured by the camera, to quickly retrieve our favorite themes from a large music video archive (e.g., a pop/rock concert database), or to efficiently generate comprehensive overviews, summaries, and abstracts of movies, sports TV broadcasts, surveillance, meeting recordings, and educational video material. However, what about the task of finding exciting parts of a sports TV broadcast or funny and romantic excerpts from a movie? What about locating unpleasant video clips we would be reluctant to let our children watch? This article considers how we feel about the content we see or hear. As opposed to the cognitive content information composed of the facts about the genre, temporal content structure (shots, scenes) and spatiotemporal content elements (objects, persons, events, topics) we are interested in obtaining the information about the feelings, emotions, and moods evoked by a speech, audio, or video clip. We refer to the latter as the affective content, and to the terms such as “happy” or “exciting” as the affective labels of an audiovisual signal.

Hanjalic A and Xu L 2005 Affective video content and representation modelling, IEEE Transactions on Multimedia 7 (1): 143-154. DOI: 10.1109/TMM.2004.840618.

Abstract

This paper looks into a new direction in video content analysis – the representation and modelling of affective video content. The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both are referred to as affect) that are expected to arise in the user while watching that clip. The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sport program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, the affective content analysis is also likely to contribute to enhancing the quality of personalizing the video delivery to the user. We propose in this paper a computational framework for affective video content representation and modelling. This framework is based on the dimensional approach to affect that is known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space by using the models that link the arousal and valence dimensions to low-level features extracted from video data. This results in the arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of expected transitions from one feeling to another along a video, as perceived by a viewer.

Machajdik J and Hanbury A 2010 Affective image classification using features inspired by psychology and art theory, ACM Multimedia Conference 25-29 October 2010, Firenze, Italy.

Abstract

Images can affect people on an emotional level. Since the emotions that arise in the viewer of an image are highly subjective, they are rarely indexed. However there are situations when it would be helpful if images could be retrieved based on their emotional content. We investigate and develop methods to extract and combine low-level features that represent the emotional content of an image, and use these for image emotion classification. Specifically, we exploit theoretical and empirical concepts from psychology and art theory to extract image features that are specific to the domain of artworks with emotional expression. For testing and training, we use three data sets: the International Affective Picture System (IAPS); a set of artistic photography from a photo sharing site (to investigate whether the conscious use of colors and textures displayed by the artists improves the classification); and a set of peer rated abstract paintings to investigate the influence of the features and ratings on pictures without contextual content. Improved classification results are obtained on the International Affective Picture System (IAPS), compared to state of the art work.

This paper does not relate specifically to film, but I include it anyway because it is interesting to read alongside the other papers listed here and in the context of cognitive film theory. The pdf linked to for this paper is over 10MB, so it may be quite slow to download.

Soleymani M, Chanel G, Kierkels JJK, and Pun T 2008 Affective characterization of movie scenes based on multimedia content analysis and user’s physiological emotional responses, IEEE International Symposium on Multimedia, 15-17 December 2008, Berkeley, California, USA [Abstract only].

Abstract

In this paper, we propose an approach for affective representation of movie scenes based on the emotions that are actually felt by spectators. Such a representation can be used for characterizing the emotional content of video clips for e.g. affective video indexing and retrieval, neuromarketing studies, etc. A dataset of 64 different scenes from eight movies was shown to eight participants. While watching these clips, their physiological responses were recorded. The participants were also asked to self-assess their felt emotional arousal and valence for each scene. In addition, content-based audio- and video-based features were extracted from the movie scenes in order to characterize each one. Degrees of arousal and valence were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based features. We showed that a significant correlation exists between arousal/valence provided by the spectator’s self-assessments, and affective grades obtained automatically from either physiological responses or from audio-video features. This demonstrates the ability of using multimedia features and physiological responses to predict the expected affect of the user in response to the emotional video content.

Yoo HW and Cho SB 2007 Video scene retrieval with interactive genetic algorithm, Multimedia Tools and Applications 34 (3): 317-336. DOI: 10.1007/s11042-007-0109-8.

Abstract

This paper proposes a video scene retrieval algorithm based on emotion. First, abrupt/gradual shot boundaries are detected in the video clip of representing a specific story. Then, five video features such as “average colour histogram,” “average brightness,” “average edge histogram,” “average shot duration,” and “gradual change rate” are extracted from each of the videos, and mapping through an interactive genetic algorithm is conducted between these features and the emotional space that a user has in mind. After the proposed algorithm selects the videos that contain the corresponding emotion from the initial population of videos, the feature vectors from them are regarded as chromosomes, and a genetic crossover is applied to those feature vectors. Next, new chromosomes after crossover and feature vectors in the database videos are compared based on a similarity function to obtain the most similar videos as solutions of the next generation. By iterating this process, a new population of videos that a user has in mind are retrieved. In order to show the validity of the proposed method, six example categories of “action,” “excitement,” “suspense,” “quietness,” “relaxation,” and “happiness” are used as emotions for experiments. This method of retrieval shows 70% of effectiveness on the average over 300 commercial videos.

Finally, a report from a couple of years ago that appeared in IEEE Spectrum about a jacket that lets you “feel the movies” to add a sense of touch to the emotional events in a film.

The jacket contains 64 independently controlled actuators distributed across the arms and torso. The actuators are arrayed in 16 groups of four and linked along a serial bus; each group shares a microprocessor. The actuators draw so little current that the jacket could operate for an hour on its two AA batteries even if the system was continuously driving 20 of the motors simultaneously.

So what can the jacket make you feel? Can it cause a viewer to feel a blow to the ribs as he watches Bruce Lee take on a dozen thugs? No, says Lemmens. Although the garment can simulate outside forces, translating kicks and punches is not what the actuators are meant to do. The aim, he says, is investigating emotional immersion.

The article can be accessed here.

Research on film industries

This week, a collection of articles looking at film industries from perspectives different from those typically found in film studies. As usual, the version linked to may not be the final version published.

There is a lot of interesting research on film industries available through the Copenhagen Business School’s Knowledge portal (here), and by searching its research database and open archive. The CBS has a robust approach to open access and most of the research is available in English. Topics include:

  • The internationalization of the Indian film industry
  • City branding and film festivals
  • Film labour markets
  • The Danish film industry
  • Globalization and the cinema

Bakker G 2004 At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938, Working Paper 81, Department of Economic History, London School of Economics.

This paper estimates and compares the benefits cinema technology generated to society in Britain, France and the US between 1900 and 1938. It is shown how cinema industrialised live entertainment, by standardisation, automation and making it tradable. The economic impact is measured in three ways: TFP-growth, social savings in 1938 and the consumer surplus enjoyed in 1938. Preliminary findings suggest that the entertainment industry accounted for 1.5 to 1.7 percent of national TFP-growth and for 0.9 to 1.6 percent of real GDP-growth in the three countries. Social savings were highest in the US (c. 2.5 billion dollars and three million workers) and relatively modest in Britain and France, possibly because of the relative abundance of skilled live-entertainment workers. Comparative social savings at entertainment PPP-ratios inflate British social savings to above the US level. Converging exchange rates and PPP price ratios suggest rapid international market integration. The paper’s methodology and findings may give insight in technological change in other service industries that were also industrialised.

Cazetta S 2010 Cultural clusters and the city: the example of Filmbyen in Copenhagen, ACEI 16th International Conference on Cultural Economics, 9-12 June 2010, Copenhagen, Denmark.

This paper explores the origins and development of Filmbyen (FilmCity), a media hub created around Lars von Trier’s film company Zentropa on the outskirts of Copenhagen.

In the first part of the paper the theoretical framework is introduced, with a review of the relevant literature concerning the role of culture in urban development and with a focus on clustering in the cultural industries.

Subsequently, after analyzing what kind of impact the film industry has on local economic development, and more specifically what role it plays in urban and regional development strategies (looking at Greater Copenhagen), the case of Filmbyen is studied in detail. The location patterns of film and film-related companies based in this special district are investigated with a small-scale survey – observing in particular the advantages of clustering, the networks that are created, and the kind of urban environment that comes about.

Coe NM 2000 The view from out West: embeddedness, inter-personal relations and the development of an indigenous film industry in Vancouver, Geoforum 31: 391-407.

This paper considers the development of a particular cultural industry, the indigenous film and television production sector, in a specific locality, Vancouver (British Columbia, Canada). Vancouver’s film and television industry exhibits a high level of dependency on the location shooting of US funded productions, a relatively mobile form of foreign investment capital. As such, the development of locally developed and funded projects is crucial to the long-term sustainability of the industry. The key facilitators of growth in the indigenous sector are a small group of independent producers that are attempting to develop their own projects within a whole series of constraints apparently operating at the local, national and international levels. At the international level, they are situated within a North American cultural industry where the funding, production, distribution and exhibition of projects is dominated by US multinationals. At the national level, both government funding schemes and broadcaster purchasing patterns favour the larger production companies of central Canada. At the local level, producers have to compete with the demands of US productions for crew, locations and equipment. I frame my analysis within notions of the embeddedness or embodiment of social and economic relations, and suggest that the material realities of processes operating at the three inter-linked scales are effectively embodied in a small group of individual producers and their inter-personal networks.

Hoefert de Turégano T 2006 Public Support for the International Promotion of European Films, European Audiovisual Observatory.

Jones C 2001 Co-evolution of entrepreneurial careers, institutional rules, and competitive dynamics in American Film, 1895-1920, Organization Studies 22 (6): 911-944.

An historical case analysis of the American film industry is undertaken to gain a better understanding of the co-evolutionary processes of entrepreneurial careers, institutional rules and competitive dynamics in emerging industries. The study compares technology and content-focused periods, which were driven by entrepreneurs with different career histories and characterized by distinct institutional rules and competitive dynamics. Archival data and historical analysis are used to trace how entrepreneurial careers, firm capabilities, institutional rules, and competitive dynamics co-evolved. A co-evolutionary perspective is integrated with insights from institutional and resource-based theories to explain how the American film industry emerged, set an initial trajectory with specific institutional rules and competitive dynamics, and then changed.

Mezias SJ and Kuperman JC 2001 The community dynamics of entrepreneurship: the birth of the American film industry, 1895-1929, Journal of Business Venturing 16 (3): 209-233. [NB: this is not the full abstract, which is actually longer than some research papers].

This paper provides insight for practitioners by exploring the collective process of entrepreneurship in the context of the formation of new industries. In contrast to the popular notions of entrepreneurship, with their emphasis on individual traits, we argue that successful entrepreneurship is often not solely the result of solitary individuals acting in isolation. In many respects, entrepreneurs exist as part of larger collectives. First and foremost, there is the population of organizations engaging in activities similar to those of the entrepreneurial firm, which constitute a social system that can affect entrepreneurial success. In addition, there is also a community of populations of organizations characterized by interdependence of outcomes. Individual entrepreneurs may be more successful in the venturing process if they recognize some of the ways in which their success may depend on the actions of entrepreneurs throughout this community. Thus, we urge practitioners and theorists alike to include a community perspective in their approach to entrepreneurship. We also suggest that one way of conceptualizing the community of relevance might be in terms of populations of organizations that constitute the value chain. For example, in the early film industry a simple value chain with three functions—production, distribution, and exhibition—is a convenient heuristic for considering what populations of organizations might be relevant. As we show in our case study of that industry, a community model offers insights into the collective nature of entrepreneurship and the emergence of new industries.

Orbach BY and Einav L 2007 Uniform prices for differentiated goods: the case of the movie-theater industry, International Review of Law and Economics 27 (2): 129-153.

Since the early 1970s, movie theaters in the United States have employed a pricing model of uniform prices for differentiated goods. At any given theater, one price is charged for all movies, seven days a week, 365 days a year. This pricing model is puzzling in light of the potential profitability of prices that vary with demand characteristics. Another unique aspect of the motion-picture industry is the legal regime that imposes certain constraints on vertical arrangements between distributors and retailers (exhibitors) and attempts to facilitate competitive bidding for films. We explore the justifications for uniform pricing in the industry and show their limitations. We conclude that exhibitors could increase profits by engaging in variable pricing and that they could do so more easily if the legal constraints on vertical arrangements are lifted.

 

Shot length distributions in the films of Laurel and Hardy

UPDATE: 28 June 2012 – this article has now been published as Shot length distributions in the short films of Laurel and Hardy, 1927 to 1933, Cine Forum 14 2012: 37-71.

This week I put up the first draft of my analysis of the impact of sound technology on the distribution of shot lengths in the short films of Laurel and Hardy from 1927 to 1933. The pdf file can be accessed here: Nick Redfern – Shot length distributions in the short films of Laurel and Hardy.

Abstract

Stan Laurel and Oliver Hardy were one of the few comedy acts to successfully make the transition from the silent era to sound cinema in the late-1920s. The impact of sound technology on Laurel and Hardy films is analysed by comparing the median shot lengths and the dispersion of shot lengths of silent shorts (n = 12) produced from 1927 to 1929 inclusive, and sound shorts (n = 20) produced from 1929 to 1933, inclusive. The results show that there is a significant difference (U = 56.0, p = 0.0128, PS = 0.2333) between the median shot lengths of the silent films (median = 3.5s [95% CI: 3.2, 3.7]) and those of the sound films (median = 3.9s [95% CI: 3.5, 4.3]); and this represents an increase in shot lengths in the sound films by HLΔ = 0.5s (95% CI: 0.1, 1.1). The comparison of Qn for the silent films (median = 2.4s [95% CI: 2.1, 2.7]) with the sound films (median = 3.0s [95% CI: 2.6, 3.4]) reveals a statistically significant increase in the dispersion of shot lengths (U = 54.5, p = 0.0109, PS = 0.2271) estimated to be HLΔ = 0.6s (95% CI: 0.1, 1.1). Although statistically significant, these differences are smaller than those reported in other quantitative analyses of film style and sound technology, and this may be attributed to Hal Roach’s commitment to pantomime, the working methods of Laurel, Hardy, and their writing/producing team, and the continuity of personnel in Roach’s unit mode of production which did not change substantially with the introduction of sound.
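
For readers who want to reproduce this kind of comparison with standard statistical tools rather than a spreadsheet, a minimal sketch follows. The shot length values are placeholders of my own, not the data used in the paper (which is available in the spreadsheet linked below); the test, the probability of superiority, and the Hodges-Lehmann shift are computed as in the abstract.

```python
import numpy as np
from scipy import stats

# Placeholder median shot lengths (seconds); not the values from the paper.
silent = np.array([3.2, 3.3, 3.4, 3.5, 3.5, 3.5, 3.6, 3.6, 3.7, 3.7, 3.8, 3.2])
sound = np.array([3.5, 3.6, 3.7, 3.7, 3.8, 3.8, 3.9, 3.9, 3.9, 4.0,
                  4.0, 4.1, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.5, 4.0])

# Mann-Whitney U test of the difference between the silent and sound groups.
u, p = stats.mannwhitneyu(silent, sound, alternative="two-sided")

# Probability of superiority: the chance a randomly chosen silent film has a
# longer median shot length than a randomly chosen sound film.
ps = u / (len(silent) * len(sound))

# Hodges-Lehmann shift estimate: median of all pairwise differences.
hl_delta = np.median(sound[:, None] - silent[None, :])

print(f"U = {u:.1f}, p = {p:.4f}, PS = {ps:.4f}, HL delta = {hl_delta:.2f}s")
```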

UPDATE: 25 November 2010. WordPress have now very helpfully made it possible to upload Excel spreadsheets to blogs, and so I have replaced the Word file with an Excel file that is much easier to use. This data also now includes information on which shots are titles (as indicated by a T in an adjacent column). I accept no liability for any problems you may have when downloading and using Excel spreadsheets on your computer. The data used in this study can be accessed in the form of an Excel .xls file here: Nick Redfern Laurel and Hardy shot length data. The methodology behind the sources and collection of this data is described in the above paper.

Shot length distributions in German cinema, 1929 to 1933

This post compares the median shot lengths and the dispersion of shot lengths in German films from 1929 to 1933, inclusive. The films are grouped by year, and can also be divided into the silent films of 1929 and the sound films of the other years.

Methods

Shot length data was collected from the Cinemetrics database for 67 films released from 1929 to 1933, inclusive.

As the distribution of shot lengths in a motion picture is typically asymmetric with a number of outliers, the median shot length is used as a robust measure of location because it is not dependent on an underlying probability distribution and has a high breakdown point. The estimator Qn is used as a robust measure of scale, and is based on the distance of each data point from every other. Qn has a breakdown point of 50% and a bounded influence function, and is therefore robust. As this estimator is not dependent upon an underlying probability distribution or a measure of location, it is appropriate for the asymmetric distributions typically encountered in the cinema. For details on how to calculate Qn see here.
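
Although the calculations in this post were carried out in Excel (see below), a naive implementation of Qn makes the definition concrete. This is a sketch only: it uses the textbook O(n²) formulation and omits the small-sample correction factors.

```python
import numpy as np
from itertools import combinations

def qn(x, c=2.2219):
    """Naive O(n^2) version of Rousseeuw and Croux's Qn scale estimator.

    Qn is c times the k-th smallest of the pairwise absolute differences,
    where k = h(h-1)/2 and h = floor(n/2) + 1. The constant c makes the
    estimator consistent for the standard deviation at the normal
    distribution; small-sample correction factors are omitted here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = sorted(abs(a - b) for a, b in combinations(x, 2))
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return c * diffs[k - 1]

# Example: shot lengths (seconds) for a hypothetical short sequence.
shots = [2.1, 2.8, 2.9, 3.1, 3.4, 3.8, 4.2, 5.0, 7.6, 12.5]
print(f"Qn = {qn(shots):.2f}s")
```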

Kruskal-Wallis analysis of variance (corrected for ties) was used as an omnibus test of the difference between the films grouped by year, at a significance level of 0.05. If this test returned a significant result, Dunn’s post-hoc test (corrected for ties) was employed for the pairwise comparison of groups, using a critical z-value of 2.3263 at a significance level of p = 0.01.
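
A sketch of the omnibus test and the tie-corrected Dunn comparisons might look like the following. The grouped values are placeholders of my own rather than the Cinemetrics samples, and the Dunn z-statistics are computed directly from the pooled mean ranks.

```python
import numpy as np
from scipy import stats
from itertools import combinations

# Placeholder median shot lengths (seconds) grouped by release year; the
# actual samples come from the Cinemetrics database.
groups = {
    1929: np.array([2.8, 3.1, 3.6, 3.9, 4.2, 4.5]),
    1930: np.array([5.2, 5.8, 6.0, 6.4, 6.9, 7.1]),
    1931: np.array([4.9, 5.5, 5.8, 6.1, 6.2, 6.6]),
}

# Omnibus test: Kruskal-Wallis ANOVA (scipy applies the tie correction).
h_stat, p_value = stats.kruskal(*groups.values())
print(f"H = {h_stat:.4f}, p = {p_value:.4f}")

# Dunn's post-hoc test on the pooled ranks, with the tie correction term.
pooled = np.concatenate(list(groups.values()))
ranks = stats.rankdata(pooled)
n_total = len(pooled)

mean_rank, sizes, start = {}, {}, 0
for year, values in groups.items():
    sizes[year] = len(values)
    mean_rank[year] = ranks[start:start + len(values)].mean()
    start += len(values)

counts = np.unique(pooled, return_counts=True)[1]
tie_term = (counts ** 3 - counts).sum() / (12 * (n_total - 1))

for a, b in combinations(groups, 2):
    se = np.sqrt((n_total * (n_total + 1) / 12 - tie_term)
                 * (1 / sizes[a] + 1 / sizes[b]))
    z = abs(mean_rank[a] - mean_rank[b]) / se
    print(f"{a} vs {b}: z = {z:.4f}")  # compare with the critical value (2.3263)
```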

Effect sizes of difference between groups were estimated using the Hodges-Lehmann median difference of pairwise comparisons (HLΔ), and this result is reported with a distribution free (Moses) confidence interval.
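
The Hodges-Lehmann shift and a Moses-style confidence interval can both be read off the ordered pairwise differences. The sketch below is one way of doing this, using the normal approximation to the Mann-Whitney distribution to pick the critical rank and simulated placeholder samples in place of the German data.

```python
import numpy as np
from scipy import stats

def hl_shift_with_ci(x, y, confidence=0.95):
    """Hodges-Lehmann estimate of the shift (y - x) with a distribution-free
    (Moses) confidence interval, using the normal approximation to the
    Mann-Whitney distribution to choose the critical rank."""
    diffs = np.sort((y[:, None] - x[None, :]).ravel())
    n, m = len(x), len(y)
    hl = np.median(diffs)
    z = stats.norm.ppf(0.5 + confidence / 2)
    k = int(np.floor(n * m / 2 - z * np.sqrt(n * m * (n + m + 1) / 12)))
    k = max(k, 1)
    return hl, diffs[k - 1], diffs[-k]

# Placeholder samples standing in for the 1929 and 1930-1933 groups.
rng = np.random.default_rng(1)
silent = rng.normal(3.8, 0.8, 12)
sound = rng.normal(6.1, 1.2, 55)

hl, lower, upper = hl_shift_with_ci(silent, sound)
print(f"HL delta = {hl:.1f}s (95% CI: {lower:.1f}, {upper:.1f})")
```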

All calculations were performed using Microsoft Excel 2007.

Results

The statistical data for each film is given in Tables 1 through 5. Shot length data for these films is presented in Figure 1 for the median shot lengths and Figure 2 for Qn.

For the median shot lengths, the results show that there is a statistically significant difference between the years (KW-ANOVA: Hc = 14.0359, p = 0.0064). Group comparisons were carried out using a Dunn post-hoc test, which returned significant results for the comparisons of the silent films of 1929 with the sound films of 1930 (Tc = 3.5482), 1931 (Tc = 2.4476), 1932 (Tc = 2.5739), and 1933 (Tc = 2.8444). There are no significant differences in the distribution of the median shot lengths for any other pairwise comparisons.

Turning to Qn, the same patterns we see for the median shot lengths are evident. There is a statistically significant difference (KW-ANOVA: Hc = 19.4967, p = 0.0006); and this difference occurs in the pairwise comparisons between 1929 and 1930 (Tc = 4.1611), 1929 and 1931 (Tc = 2.9438), 1929 and 1932 (Tc = 2.9669), and 1929 and 1933 (Tc = 3.2416), while there are no significant differences for any other pairwise comparisons.

Table 1 Median shot length and Qn for German films released in 1929 (n = 12)

Table 2 Median shot length and Qn for German films released in 1930 (n = 11)

Table 3 Median shot length and Qn for German films released in 1931 (n = 14)

Table 4 Median shot length and Qn for German films released in 1932 (n = 17)

Table 5 Median shot length and Qn for German films released in 1933 (n = 13)

There is clearly a difference in the style of the silent films of 1929 (n = 12) when compared with the sound films from 1930 to 1933 (n = 55). The sample median of the median shot lengths for films released in 1929 is 3.8s (95% CI: 2.8, 4.7) with an interquartile range of 1.6s, and for Qn is 2.6s (95% CI: 1.6, 3.5) and IQR = 1.5s. The sample median of the median shot lengths for films released between 1930 and 1933 is 6.1s (95% CI: 5.5, 6.7) with IQR = 2.9s, and for Qn is 5.5s (95% CI: 4.9, 6.1) and IQR = 2.4s. Dividing the sample into silent and sound films, the change in the median shot lengths is estimated to be an increase of HLΔ = 2.2s (95% CI: 1.0, 3.4) and the change in the dispersion of shot lengths is estimated to be an increase of HLΔ = 2.6s (95% CI: 1.5, 3.5). From these results we can say that the stylistic changes that occur in German cinema with the coming of sound are (1) a slowing of the rate at which films are cut and (2) an increase in the dispersion of shot lengths. This difference can be clearly seen in the box plots of these samples in Figures 1 and 2.

Figure 1 The distribution of median shot lengths for films produced in Germany 1929 to 1933, inclusive

Figure 2 The distribution of Qn for films produced in Germany 1929 to 1933, inclusive

Comparing these results to earlier results posted on this blog for Hollywood and German cinema (see here and here), we can see that the changes in film style that occurred in Hollywood with the introduction of sound technology also occur in Germany, but only after they had already occurred in Hollywood.
