Monday, 30 November 2020

Final Projection Mapping experiment



 


The final projection mapping video is split into three sections. Each section is designed to represent one of Michel Chion's audiovisual theories, namely his findings on the temporalisation of the image.


Exhibit A

"sound renders the perception of time in the image as exact, detailed, immediate, concrete"

the shot is “going somewhere and it is oriented in time"

The first experiment progressively shows how sound synchronised with visuals can create a sense of anticipation. Skream's Settled is the electronic track I chose to accompany the projections. As each note sounds, windows appear to match it, and the stairs and windows move up and down according to the pitch of the note playing, mimicking A/V animation.

Contradictions

This could be achieved more accurately by making the keyframes react to the music itself, a practice I will look into (a rough sketch of the idea follows below).
A/V artists can perfectly time the visuals by hand, making the practice somewhat obsolete.
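One route I could try: After Effects can convert the track into an "Audio Amplitude" layer via Keyframe Assistant > Convert Audio to Keyframes, but that only gives overall loudness. Below is a rough sketch of a more note-level approach, assuming Python with librosa and numpy installed; the file name, the 0-100 scaling and the CSV output are my own placeholder choices, and the values would still need to be scripted or pasted into After Effects by hand.

```python
# Sketch: pull note onsets and a loudness envelope out of the track so that
# window/stair keyframes can sit on the actual beats rather than being eyeballed.
# Assumes librosa + numpy are installed; "settled.wav" is a placeholder file name.
import csv

import librosa
import numpy as np

y, sr = librosa.load("settled.wav", mono=True)            # samples + sample rate

# Times (seconds) where a new note or hit begins - candidate keyframe positions.
onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# RMS loudness per analysis frame, normalised to 0-100 to drive a slider/position.
rms = librosa.feature.rms(y=y)[0]
frame_times = librosa.frames_to_time(np.arange(len(rms)), sr=sr)
level = 100 * (rms - rms.min()) / (rms.max() - rms.min())

# Dump (time, value) pairs that could be scripted or pasted into AE keyframes.
with open("keyframes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s", "level"])
    for t, v in zip(frame_times, level):
        writer.writerow([round(float(t), 3), round(float(v), 2)])

print(f"{len(onset_times)} onsets found, first few at {onset_times[:5]} s")
```

The onset times could drive when a window pops on, while the loudness envelope could drive the height of the stairs.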


Exhibit B

  “sound vectorises or dramatises shots, orienting them toward a future, a goal, and creation of a feeling of imminence”

"temporisation also depends on the type of sound present. depending on density, internal texture, tone quality. and progression, a sound can temporally animate an image to a greater or lesser degree, and with a more or less driving or restrained rhythm."

The second experiment lets the image take a back seat. As the music builds up, the shapes on the building shift. Although the audiovisual connection is limited to our processing of the two as separate streams, the visuals do assist in creating anticipation. The visuals, which crawl along with the build-up, help to slow the temporality of the scene before the climax of the song.

From an animation point of view, the frames could have been more cleanly done. The outline should have been a pre-composition, with the shifting colour sitting on a lower layer in After Effects.

The corner pin tool could also have been used more accurately throughout the whole project. I sometimes positioned it by hand where it could have been placed more precisely by typing in the exact corner coordinates (see the sketch below).
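To illustrate what typing exact coordinates could look like in practice, here is a minimal sketch, assuming Python with OpenCV and numpy; the wall and window measurements are invented for the example. The idea is to find the wall's four projector-space corners once, then compute every window's corner pin values from the measured drawing instead of dragging each one by eye.

```python
# Sketch: derive exact corner pin coordinates for each window from measured
# drawing coordinates (mm) plus four hand-found projector corners of the wall.
# Assumes opencv-python and numpy; all numbers below are illustrative only.
import cv2
import numpy as np

# Wall outline in the measured drawing (mm): a 600 x 400 facade.
wall_mm = np.float32([[0, 0], [600, 0], [600, 400], [0, 400]])

# Where those four corners landed in the projector comp (px), found once by eye.
wall_px = np.float32([[212, 148], [1694, 176], [1652, 958], [238, 930]])

# Perspective transform from drawing space to projector space.
H = cv2.getPerspectiveTransform(wall_mm, wall_px)

# A window measured on the drawing (mm), listed corner by corner.
window_mm = np.float32([[[80, 60]], [[200, 60]], [[200, 180]], [[80, 180]]])

# Exact pixel coordinates to type into the Corner Pin effect for that window.
window_px = cv2.perspectiveTransform(window_mm, H).reshape(-1, 2)

for name, (x, y) in zip(["UL", "UR", "LR", "LL"], window_px):
    print(f"{name}: ({x:.1f}, {y:.1f})")
```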


Exhibit C

 “Movement of characters or objects, movement of smoke or light”

“Here, sound’s temporality combines with the temporality already present in the image. The two may move in concert or slightly at odds with each other, in the same manner as two instruments playing simultaneously.”


Extra animation

  


A spare animation pairing untimed music and visuals. By amalgamating them randomly, I hoped to show the temporalisation of the image through motion and fast-paced music. The video was not used in the installation, but it shows some of the practice I have been exploring.


The final rendered animation (not projected)

Tuesday, 17 November 2020

Beginning Projection Mapping




In order to get the projection mapping studio functional, I made my projector a secondary monitor within After Effects. With pre-drawn dimensions, I could find the correct measurements and perspective using the corner pin tool.


Now that the walls and windows have been matched up, I am able to reveal parts of the house and match them to the music. This works effectively in creating audiovisual stimuli. I have also looked at lighting the linear design with pulsing colours. Using Skream's Settled (which contains clear electronic cues), I was able to explore Chion's simplest theory of matching sound perfectly to visuals. This is reminiscent of A/V visuals that use geometric shapes in time with the music.

Moving forward, I will apply more methods of animation to test the selected theories from Chion's studies. Extra dimensions, such as the roof and stairs, will also need to be drawn up, making the animation more complex but more three-dimensional.

Monday, 16 November 2020

Chapters



Introduction (500 words)

Tarkovsky - called cinema "the art of sculpting in time"

“The motion picture can be thought of as a program. And it is more precisely a program than either a language or a mere set of stimuli. It is a very complex set of instructions utilising images, actions and sounds, a string of commands to attend to this now, in this light, from this angle, at this distance, and so forth”

“To recall earlier sequences and anticipate future ones” (p.12)


A brief history of sound and image in cinema (500 words)

Early opinions of the purpose of film

Andre Bazin - “For Bazin, the film was neither a product of the mind nor a clash of concepts, but rather a photochemical record of reality.” (p.5) - film was not considered an art form.

“An apparent advantage to writing film from this perspective is that film writers may feel free to appropriate the language of psychoanalysis, Marxism, or any pop culture movement or special interest cause, without assuming responsibility for their theoretical imperatives. Such writers may claim adherence to no theory at all.” 

Started with political undertones - film was a method of propaganda, promotion of an ulterior motive without consequence

“By the mid eighties, a few courageous film theorists suggested that cognitive science might be a more productive path than the then pervasive, psychoanalytic/Marxist approach to film study.” (p.8)


Multi modal connections (500 words)

the human brain is primarily an image processor. - arguable

Studying those with inhibited senses is an effective way of acknowledging the complex interplay between sight and hearing

“Deaf people raised on sign language apparently develop a special ability to read and structure rapid visual phenomena”...”raises the question whether the deaf mobilize the same regions at the centre of the brain as hearing people do for sound - one of the many phenomena that lead us to question received wisdom about distinctions between the categories of sound and image”

(page 12)

Within film, "the viewer can be thought of as a standard biological audio/video processor" (page 13), processing the amalgamation of media as one. The ease with which we do this depends on the filmmaker's intentions and how meticulously they blend the multi-modal elements.

Sound more than image has the ability to saturate and short-circuit our perception (page 33)

since early 2007, the Internet has seen a 9900 percent increase in the use of visualized information

‘visuals are more concrete to the brain than words, they’re easier to remember, as well.’

‘The visceral, emotional reactions that strong visuals can evoke are even quicker for our brains to process than emotion-neutral visuals — 13 milliseconds on average. Responding to sound is only slightly slower, at 146 milliseconds on average’

"the tasks involved in processing and enjoying music are distributed across several brain areas"

one study asked students to remember many groups of three words each, such as dog, bike, and street. Students who tried to remember the words by repeating them over and over again did poorly on recall. In comparison, students who made the effort to make visual associations with the three words, such as imagining a dog riding a bike down the street, had significantly better recall.

visual information—including the performer's sex, attractiveness, movement, and so on— could confound listeners' ability to make judgments of the quality of the music being performed. 

“We persist in ignoring how the soundtrack has modified perception”

“Each perception remains nicely in it’s own compartment” - as an audience, we never consider the interplay between the senses as sound and image are usually combined seamlessly

“Identifying so called redundancy between the two domains and debating interrelations between forces”...”Which is more important, sound or image?”

The image projects onto them a meaning they do not have at all by themselves



Application to film and music (1000 words)

Electronic music performances are often sensor- or laptop-based; these tools are not always visible to the public, and their usage does not require big gestures and actions from the performer.

Any musician that reaches audiences of a certain size will eventually face the question of A/V accompaniment, regardless of whether visual presentation has been central to their work or not.

an unclear cognitive link between the sound and the performer’s actions

"The visuals are what the viewer tends to mostly focus on and the sound subconsciously alters how the visuals are perceived."

Commentary on Bergman’s Persona - cutting sound from one scene sees a succession of 3 shots - “the entire sequence has lost its rhythm and unity”

“Sounds we didn’t especially hear when there was only sound emerge from the image like dialogue balloons in comics.”

(page 4)

"Audiovisual Illusion"

“Informative value with which a sound enriches a given image so as to create a definite impression”

Added value - Bergman “(eminently incorrect) impression that sound is unnecessary, that sound merely duplicates a meaning which in reality it brings about”

Image synchronism - (sounds happening in time on screen, blows, explosions etc.)

(page 5)

"Voice that is isolated in the sound mix like a solo instrument"..."sounds (music and noise) are merely an accompaniment"

“You will first seek the meaning of the words, moving on to interpret the other sounds only when your interest in meaning has been satisfied”

(page 7)

“Music can directly express its participation in the feeling of a scene, by taking on the scene’s rhythm, tone, and phrasing”

“On the other hand, music can also exhibit conspicuous indifference to the situation, by progressing in a steady, undaunted, and ineluctable manner”

(page 8)





Electronic - Kraftwerk to the Chemical  Brothers (1000 words)


Practical (1000 words)

 I love to have the visuals connected to the music, synchronized and tight. In the best case, it should represent the sound on the visual level. 

While it isn’t subtle, spectacle is effective, stunning an audience’s senses and leaving a big impression. Often, video isn’t even necessary, as lights, especially strobes, can accomplish this on their own. And as lights get more technologically advanced, it becomes easier to do interesting things with them.

"How clear is the relationship between the performer's actions and, in our particular case, the audiovisual result?"

“Sound superimposed onto image is capable of directing our attention to a particular visual trajectory.”

(page 11)



Conclusion (500 words)


Projector test

As I was unsure whether projections would show up on black foam board, I conducted a test on the model house to gauge how effectively lines and colours come out. The projections were clear and looked vibrant when using bright colours. As the animations shown were just taken from my showreel, more bespoke animations will follow which fit the perspective of the building.



Projection mapping measurements

Planning measurements for my model was essential for the construction and mapping of the projection. As each dimension will be outlined using the projector, each window needs to be exact. Also included is a rough storyboard demonstrating the animated clips which will map onto the dimensions of the building.
Using these dimensions should make for a smooth projection mapping project with the assistance of After Effects' Mercury Transmit tool...

I will need to create a measured net of dimensions for the roof and stairs (a rough layout sketch follows).
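As a sketch of how the measured net could translate into exact comp coordinates, the snippet below scales placeholder millimetre measurements into pixels for a 1920x1080 comp; all dimensions and element names are illustrative, not my real measurements.

```python
# Sketch: turn measured model dimensions (mm) into pixel rectangles for a
# 1920x1080 After Effects comp, so every face/window lands at an exact size.
# The house front is assumed to be 600 mm wide; all figures are placeholders.

COMP_W, COMP_H = 1920, 1080          # projector/comp resolution in px
MODEL_W_MM = 600                     # measured width of the model front
SCALE = COMP_W / MODEL_W_MM          # px per mm (here 3.2 px/mm)

# Measured net: (x, y, width, height) of each element in mm from the top-left.
net_mm = {
    "front_wall": (0, 100, 600, 400),
    "window_1":   (80, 160, 120, 120),
    "window_2":   (400, 160, 120, 120),
    "roof":       (0, 0, 600, 100),
    "stairs":     (250, 500, 100, 60),
}

for name, (x, y, w, h) in net_mm.items():
    px = tuple(round(v * SCALE) for v in (x, y, w, h))
    print(f"{name:12s} mm={(x, y, w, h)} -> px={px}")
```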



Multimodality in Cinema



Since silent cinema, the concept of multimodality in film has grown exponentially: it started quite simply with the application of music to support the overall tone and to emphasise on-screen emotions, and has reached the present day, where sound design is an integral part of the way we perceive film. Ranging from minute details to broad soundscapes, filmmakers choose to utilise sound as a way to engage audiences. Despite a number of agendas behind film surfacing over time, Joseph Anderson in The Reality of Illusion stated that by the mid-eighties "a few courageous film theorists suggested that cognitive science might be a more productive path than the then pervasive psychoanalytic/marxist approach to film study". In relation to my exploration of audiovisual illusion, one can see that the amalgamation of media being used on the same timeline can create a number of effects. This is all due to our cognition of different sensory stimuli being used simultaneously. Filmmakers and theorists have tapped into this, enabling them to use sound as a tool to affect the way we perceive moving images.


A blog about the psychology of sound and image notes the work of Swedish researchers Johnny Wingstedt, Sture Brändström and Jan Berg. They discuss leitmotifs within film, specifically in Jaws (1975), wherein the infamous two-note motif, repeated among other tension-building chords, announces the arrival of the eponymous shark without our even seeing it. This demonstrates how, through an audio motif, occurrences offscreen can be hinted at. Not only this, but within Jaws the researchers demonstrated how the now-famous notes come not only to signal "here is the shark" but to cause a rising sense of discomfort and danger in the audience. Such methods are now common, particularly in the horror genre, because events can be foreshadowed and tension built: sound is as effective at foreshadowing as visual cues are.


In my observations, I have seen how contemporary films use similar motifs to signal unseen or offscreen entities. Ari Aster's Hereditary (2018) demonstrates a technique similar to Jaws, using a spoken-word motif to signal one of the characters who dies early on in the film. With the clicking noise the character makes with her mouth established as a motif early on, manifestations of the sound as the film progresses create an ever more foreboding atmosphere and sense of terror: the sound takes on a spectral aspect where, with the knowledge that the character in question is deceased, each new iteration unnerves the audience. In relation to my exploration of music, a similarity can be drawn with the multimodality that listening to music alone conveys to the rest of the senses, much like the anticipation of a visual cue when hearing a certain sound. The chords within music can create neural pathways with the other senses. For example, in 2016 a university study in the US was undertaken with the intention of explaining the peculiar spinal tingle some receive from audio. DTI brain scans were carried out on twenty participants, all of whom were listening to their favourite music; a DTI scan tests neural pathways between different sectors of the brain. The results showed that the half of the participants who had previously stated that they feel these musical "chills" had stronger neural connections: "The brains of people who felt the chills had more nerve fibres running from the auditory cortex, needed for basic hearing ability, to two other regions, namely the anterior insular cortex, involved in processing feelings, and the medial prefrontal cortex, which is thought to monitor emotions and assign values to them." Within such studies, one can notice the potential to blend sound and image in multiple ways, varying in which medium takes more prominence. While there are ways in which to harness this and collect conclusive data, the question of the complexities of cognition is extremely broad. As a comment in the article asks, "To what extent is a neurological description an explanation of first person singular emotional experience at the mentalistic level? Incommensurable I'd say. The musical experience remains mystically and wonderfully positive and inexplicable." This addresses cognition in a holistic sense. With the pathways to understanding it being extremely complex, it is important to consider how the theory I am studying can explore a handful of controlled approaches to our cognition when accompanying visuals to music. Another study into the science of musical "chills" concluded that around half of a group of people experience them, and that they stem from anticipation of beats within a tune. In a practical sense, I believe that accompanying visuals to these complex forms and to individual beats or leitmotifs can boost engagement, because music alone triggers our neural multimodality; triggering other sensory stimuli in conjunction with this would only heighten the experience.


Parallels between music and film manifest in their trajectory. Imagining sound and image running simultaneously on a timeline gives them a progressive sense; film, however, contains a narrative for audiences to grasp onto. When music takes prominence, the performer will always eventually land at visual accompaniment, because applying a seamless filmic narrative will direct an audience's attention to the visuals while complementing the music in a similar way to film and soundtrack. Within Audio-Vision, Chion credits this, writing that the image projects onto sounds "a meaning they do not have at all by themselves". This can be harnessed in my practice by applying animation to a chosen soundscape. The result will be that "Sounds we didn't especially hear when there was only sound emerge from the image like dialogue balloons in comics." Through selecting theories from Chion and other cognitive research, I will discover how a number of approaches affect the way we perceive music. There are pieces of music which will complement each approach. For example, using Skream's electronic track Settled can test how we perceive visuals when combined with clear leitmotifs. As it progresses, the repetition of chords will resonate progressively with each reiteration (much like the motifs within Jaws and Hereditary). I believe that combining timed visual cues will boost the stimulus by providing more information and interweaving the senses.






Electronic: From Kraftwerk to the Chemical Brothers - report



Before visiting the exhibition, I had selected a handful of Chion's theories about audiovisual symbiosis within film. With these in mind I was able to apply such theory to the fusion of art, design and music represented at the show, because sound superimposed over film has clear ties to visuals set over music; the only difference being that the two cognitive pathways do not correlate one to one.

Chion argues that there are three aspects to consider when looking at temporalisation in film, the first approach being tightly synchronous sound. Within the Electronic exhibition, the best example of this was in any of the A/V demonstrations. Watching a Kraftwerk live performance, I noticed that the futuristic calculus and space-age words projected behind the four performers matched the music with digital precision, initially counting to four with each individual beat and extending to Morse code and a selection of random words. There was a clear correlation between the music and the visuals because each visual cue was matched perfectly with a specific point within the music.



A/V visuals are common within musical performance as they can be programmed in synchrony with the sound. This approach makes sense because, as is the nature of all music, one is able to break a symphony down into individual notes, and adding visual stimulus in time with these notes is satisfying for an audience. In an article with Ableton, it is described how A/V visuals "share a common geometric design language, which is almost always synched to the music." Relating this back to Chion's first theory, it represents the most basic application of sound within film; for example, this would manifest when the image shows an explosion, which naturally would be paired with a loud bang. This approach is still useful to my practical work, however, as part of the visual stimulus I display can match individual beats, and can do so in more diverse ways than film because of the multitude of audio cues within the electronic music I will explore.


The second theory applies a different and more elusive idea to audiovisual fusion, wherein we look at sound "directing our attention to a particular visual trajectory" and "temporal linearisation". When a film plays, it is essentially a continuum of sound and image running at the same time, working hand in hand to shape the overall feel of the film. A succession of three shots could be linked together by the sound of footsteps or breathing (anything to give the shots a sense of succession), and many filmmakers will use such techniques to give a scene a sense of momentum. Often this can be manipulated as an illusion for an audience, with actions happening offscreen being cued in using audio. Simplified versions of this theory manifest in silent film, where music was the only audio stimulus to accompany footage and tie it together in a linear fashion. At the Electronic show, this theory interested me greatly as I observed a few of the exhibits. The first of these was The Square Cube (2007), a small-scale replica of 1024 Architecture's projection mapping piece for the electronic artist Etienne de Crecy's Beats 'n' Cubes tour. As I watched the various tessellations and folds within this light demonstration, I took audio stimulus from two different sources, the first being Laurent Garnier's dance set, which was the background music to the entire exhibition. As I viewed and listened to the two unrelated stimuli, I was stunned that each movement the cube made seemed to have some correlation with the music, which I theorise as the brain making cognitive links between the minute notes within the music and the transformation of light in the installation piece. The second way in which I observed the piece was by plugging my headphones into the piece, which played the music the cube was designed to accompany. It astounded me to find little difference between how I viewed the cube in relation to music designed to be in time with the visuals and in relation to completely separate audio. I believe this relates to Chion's theory because the temporal linearisation of sound and image which do not correlate is supplied by the viewer as a processing centre. This is a theory I will also test in my work by creating visuals which differ from tightly timed A/V animation and by experimenting with ways we can process temporalisation within audiovisual performance, essentially testing our cognition in a looser way.

Introduction



Sound and image are media which have been cross-integrated for centuries using a range of methods. My investigation will delve into the theory behind the multi-modal in film and how this has manifested over time in live musical performances. Similarities can be drawn between cinema and music from the moment cinema became a multi-modal experience. Although the information received is entirely different and can be viewed in a range of ways, the viewer as a "standard biological audio/video processor" takes in audiovisual content, channelling it towards a form of emotional response, whether the work predominantly demonstrates moving image or, conversely, accompanies music with visual stimuli. As stated in The Reality of Illusion - An Ecological Approach to Cognitive Film Theory, "The motion picture can be thought of as a program. And it is more precisely a program than either a language or a mere set of stimuli. It is a very complex set of instructions utilising images, actions and sounds, a string of commands to attend to this now, in this light, from this angle, at this distance, and so forth". The essence of my research entailed having a number of approaches to cinema in mind while researching contemporary music performance.


In film, sound can be used as a variable aspect, allowing for ambiguous interpretation of any visual stimulus. Charlie Batten of the BFI wrote that "visuals are what the viewer tends to mostly focus on and the sound subconsciously alters how the visuals are perceived". In my investigation I intend to gauge the different emotional responses moving images can have over music, informed by the practice of the multi-modal in cinema. Certain theories within Michel Chion's Audio-Vision: Sound on Screen explore the interplay between sound and image within cinema. Audiovisual trajectory is a key theory explored in Chion's studies, where he demonstrates a number of methods to give a piece a sense of continuity. It ranges from sound being used in an exact manner to give a definitive result, to ways in which sound designers and filmmakers can create illusive effects when using sound and image in conjunction. With relation to music performance, visual content has been utilised both with digital precision and with looser narratives. The existence of media such as music videos and visuals stems from the predominance of visual over audio processing. Dr. Haig Kouyoumdjian of Psychology Today argues that people are more responsive to visual stimuli, supported by the 9900% increase in visual information on the internet alone. Thus, for performing musicians and their audiences, visual cues would in a way bridge a void and create more authentic responses. A study of sight-over-sound judgments in virtuosi dance examined musicians and their ability to anticipate winning dancers from muted clips. The study concluded that, paired with music, participants were able to identify the winning pieces; conversely, it found that visual stimulus "could confound listeners' ability to make judgments of the quality of the music being performed". Such data represents the complex interplay between all of the senses and suggests that within performative acts, stimulating audiences with multimodality is a sure way to boost their engagement. This translates to live music because a large audience often cannot engage directly with a performer: to accompany the music and essentially fill a temporal gap, large spectacles of light, projection and gestural movement are applied to help dramatise a performance.

Tuesday, 10 November 2020

Statement of Intent



Multi-modal connection between sound and image, with manifestations in music and film


Sound has been used in conjunction with visual media in a variety of ways for centuries as a means to add emphasis to performative acts. In film, sound can be used as a variable aspect, allowing for ambiguous interpretation of any visual stimulus. Charlie Batten of the BFI wrote that "visuals are what the viewer tends to mostly focus on and the sound subconsciously alters how the visuals are perceived". In my investigation I intend to gauge the different emotional responses sound can create, particularly over music, informed by the work of cinematic sound designers. Furthermore, in recognition of the complexities of the brain as a processing centre for such things, I wish to look further afield at people who study connections (or the lack of them) between the senses and how this can be channelled into creative responses.


When regarding the pairing of moving image with music, it is arguable that such practice is human nature. Dr. Haig Kouyoumdjian of Psychology Today argues that the brain is primarily an image processor. Thus, for performing musicians and their audiences, visual cues would in a way bridge a void and create more authentic responses. A study of sight-over-sound judgments in virtuosi dance examined musicians and their ability to anticipate winning dancers from muted clips. The study concluded that, paired with music, participants were able to identify the winning pieces; conversely, it found that visual stimulus "could confound listeners' ability to make judgments of the quality of the music being performed". Such data represents the complex interplay between all of the senses and suggests that within performative acts, stimulating audiences with multimodality is a sure way to boost their engagement.


Incorporation of such stimulus is on the rise and ever expanding; the internet alone has seen a 9900% increase in visualised information. In the field of experiential design for performance, an ongoing show at the Design Museum presents the visual work surrounding Kraftwerk and the Chemical Brothers. The motion graphics which feature at their shows could be an effective way of discovering how practitioners pair visuals and music. Such shows incorporate bespoke visual design, and I want to discover whether the lights and projections are made in time to the music or whether they allow the viewers to make connections between the two using multi-modal connection. This will branch into my practical research because I want to find a way to create a set of visuals which can accompany different music. With this I hope to show that, given the cognitive connections sight and hearing have, visual stimulus unrelated to the music can generate a range of emotional responses from audiences.


My aim, in continuing to nurture my knowledge of this topic, is to gather more research and data. By observing different approaches, I hope to inform my own understanding of the complex cognitive connections between sight and hearing. This should aid my investigation because I will find more effective ways to demonstrate this in my practical and investigative pieces.



 

Dissertation Draft 1 (with notes)

 Introduction


Zotero Harvard referencing generator

Citethisforme



Sound and image are media which have been cross-integrated for centuries using a range of methods. My investigation will delve into the theory [Practice??] behind the multi-modal in film and how this has manifested over time in live musical performances. Similarities can be drawn between cinema and music through many of the studies into cognitive processing. Although the information received is entirely different, the viewer as a "standard biological audio/video processor" takes in audiovisual content, channelling it towards a form of emotional response. As stated in The Reality of Illusion - An Ecological Approach to Cognitive Film Theory, "The motion picture can be thought of as a program. And it is more precisely a program than either a language or a mere set of stimuli. It is a very complex set of instructions utilising images, actions and sounds, a string of commands to attend to this now, in this light, from this angle, at this distance, and so forth". The essence of my research entailed having a number of approaches to cinema in mind while researching contemporary music performance. [List chapters briefly and explain what practices I am exploring]


Sound has been used in conjunction with visual media in a variety of ways for centuries as a means to add emphasis to performative acts. In film, sound can be used as a variable aspect, allowing for ambiguous interpretation of any visual stimulus. Charlie Batten of the BFI wrote that "visuals are what the viewer tends to mostly focus on and the sound subconsciously alters how the visuals are perceived". In my investigation I intend to gauge the different emotional responses sound can create over music, informed by filmic theory. Furthermore, in recognition of the complexities of the brain as a processing centre for such things, I wish to look further afield at people who study connections (or the lack of them) between the senses and how this can be channelled into creative responses.


When regarding the pairing of moving image with music, it is arguable that such practice is human nature. Dr. Haig Kouyoumdjian of Psychology Today argues that the brain is primarily an image processor. Thus, for performing musicians and their audiences, visual cues would in a way bridge a void and create more authentic responses. A study of sight-over-sound judgments in virtuosi dance examined musicians and their ability to anticipate winning dancers from muted clips. The study concluded that, paired with music, participants were able to identify the winning pieces; conversely, it found that visual stimulus "could confound listeners' ability to make judgments of the quality of the music being performed". Such data represents the complex interplay between all of the senses and suggests that within performative acts, stimulating audiences with multimodality is a sure way to boost their engagement. This translates to live music because a large audience often cannot engage directly with a performer: to accompany the music and essentially fill a temporal gap, large spectacles of light, projection and gestural movement are applied to help dramatise a performance.


Outline the exact investigation - what topics will I research?

What question am I asking?





A brief history of sound and image in cinema [Stick to relevant theory]

Change title name


Behind the vision of all filmmakers there is an ulterior motive which, through time, has evolved the way we look at film. Nowadays these intentions are diverse because film is more artistic, different in form and meaning with every manifestation. In the early 1900s, however, film was valued for the way it could transfer observations of the world we live in by commenting on a certain political ideology or human characteristic. For example, Andre Bazin [Find contradictory opinions to compare] implied that film was a "photochemical record of reality", where writers can "appropriate the language of psychoanalysis, Marxism, or any pop culture movement or special interest cause, without assuming responsibility for their theoretical imperatives. Such writers may claim adherence to no theory at all." [Earlier opinions less relevant to my question] This essentially states that films were exclusively made as a commentary on a political outlook or an observation of society, and that film can transcribe these imperatives with minimal consequences because the filmmaker can be somewhat removed from the subject matter of the film. [More selective quotes needed]


Contrary to this early view of film, my explorations will take a more modern approach because my theory takes on a looser form and narrative. Therefore, more contemporary approaches will assist me in my investigation because they will be easier [change] to apply to musical performance. Such approaches can be seen after the mid-eighties. This time saw an evolution in the way cinema was perceived, as "a few courageous film theorists suggested that cognitive science might be a more productive path than the then pervasive, psychoanalytic/Marxist approach to film study." Such theorists are referred to as "courageous" because they opposed the ideals of their predecessors. America, for example, was believed to "propagate concealed assumptions of Capitalism" within its film industry, which is apparent in much of the content we consume. [Be more specific when using triangulation - home in on the question and have more relevant opposing opinions] However, there is now additional intent behind film, where much more care is taken over the overall aestheticism and form, rather than the sole meaning being political undertones or something of a similar nature. This can be seen in the immense care that is taken over every aspect of cinema to translate an inventive look and message which are symbiotic with each other. Such approaches engage audiences more and are driven by elements such as sound design, which is seen as an integral part of the emotional drive behind a film. This extends from minute details of sound to entire soundtracks.

More solid evidence needed related to practice and opposing theories 


Electronic: From Kraftwerk to the Chemical Brothers - report

Why is it important that cinematic theory relates to this?

A/V opposes how it is important because it is digitally rendered meaning that cinematic theory is irrelevant

Weirdcore


Electronic: From Kraftwerk to the Chemical Brothers - report


Paragraph linking my observations in relation to title

Make sure it is clear how it aids my theory and practical


Before visiting the exhibition, I had selected a handful of Chion's theories about audiovisual symbiosis within film. With these in mind I was able to apply such theory to the fusion of art, design and music represented at the show, because sound superimposed over film has clear ties to visuals set over music; the only difference being that the two cognitive pathways do not correlate one to one.

Chion argues that there are three aspects to consider when looking at temporalisation in film, the first approach being tightly synchronous sound. Within the Electronic exhibition, the best example of this was in any of the A/V demonstrations. Watching a Kraftwerk live performance, I noticed that the futuristic calculus and space-age words projected behind the four performers matched the music with digital precision, initially counting to four with each individual beat and extending to Morse code and a selection of random words. There was a clear correlation between the music and the visuals because each visual cue was matched perfectly with a specific point within the music.



A/V visuals are common within musical performance as they can be programmed in synchrony with the sound. This approach makes sense because, as is the nature of all music, one is able to break a symphony down into individual notes, and adding visual stimulus in time with these notes is satisfying for an audience. In an article with Ableton, it is described how A/V visuals "share a common geometric design language, which is almost always synched to the music." Relating this back to Chion's first theory, it represents the most basic application of sound within film; for example, this would manifest when the image shows an explosion, which naturally would be paired with a loud bang. This approach is still useful to my practical work, however, as part of the visual stimulus I display can match individual beats, and can do so in more diverse ways than film because of the multitude of audio cues within the electronic music I will explore.


The second theory applies a different and more elusive idea to audiovisual fusion, wherein we look at sound "directing our attention to a particular visual trajectory" and "temporal linearisation". When a film plays, it is essentially a continuum of sound and image running at the same time, working hand in hand to shape the overall feel of the film. A succession of three shots could be linked together by the sound of footsteps or breathing (anything to give the shots a sense of succession), and many filmmakers will use such techniques to give a scene a sense of momentum. Often this can be manipulated as an illusion for an audience, with actions happening offscreen being cued in using audio. Simplified versions of this theory manifest in silent film, where music was the only audio stimulus to accompany footage and tie it together in a linear fashion. At the Electronic show, this theory interested me greatly as I observed a few of the exhibits. The first of these was The Square Cube (2007), a small-scale replica of 1024 Architecture's projection mapping piece for the electronic artist Etienne de Crecy's Beats 'n' Cubes tour. As I watched the various tessellations and folds within this light demonstration, I took audio stimulus from two different sources, the first being Laurent Garnier's dance set, which was the background music to the entire exhibition. As I viewed and listened to the two unrelated stimuli, I was stunned that each movement the cube made seemed to have some correlation with the music, which I theorise as the brain making cognitive links between the minute notes within the music and the transformation of light in the installation piece. The second way in which I observed the piece was by plugging my headphones into the piece, which played the music the cube was designed to accompany. It astounded me to find little difference between how I viewed the cube in relation to music designed to be in time with the visuals and in relation to completely separate audio. I believe this relates to Chion's theory because the temporal linearisation of sound and image which do not correlate is supplied by the viewer as a processing centre. This is a theory I will also test in my work by creating visuals which differ from tightly timed A/V animation and by experimenting with ways we can process temporalisation within audiovisual performance, essentially testing our cognition in a looser way.


Define the points of each section of the essay

Anchor in an informed opinion which supports my theory

Find more sources agreeing and a few which disagree