Acousmatic Generations: How Recorded Music Changed the World (and Music Composition)
While on one of our journeys to Bologna’s Conservatory of Music, my lifelong friend and colleague Andrea Mazzotti told me something that would stick in my mind for days afterwards:
“We are part of an acousmatic generation”.
What is acousmatic music?
Acousmatic music (from Greek ἄκουσμα akousma, "a thing heard") is a form of
electroacoustic music that is specifically composed for presentation using
speakers, as opposed to a live performance.
Practically speaking, it’s music intended to be heard through playback devices rather than performed live.
In 1877, the famous inventor and entrepreneur Thomas Alva Edison invented the phonograph, a device capable of recording sound onto a cylinder (first tinfoil, later wax). For the first time, any sound could be reproduced just as it was first heard, albeit with early limitations and quality issues due to the medium the sound was recorded on.
Carolyn Birdsall, an Associate Professor of Media Studies at the University of Amsterdam, described this new-found sensory experience as “coherent with the general conditions of modernity that redefined the temporal and spatial relationships in society and individual perception”:
The possibility of recording and playing back sounds implied a change in auditory sensibility, much as photography and cinematography changed visual habits by creating a new way to approach what can be seen.
The phonograph was, however, invented as a practical means of recording the human voice for purely utilitarian purposes, such as the transcription of speeches. What truly changed the way we approached music was the invention of the gramophone by Emile Berliner in 1887.
Unlike Edison’s phonograph, the gramophone was developed specifically to record and play music, this time not on a wax cylinder but on an ebonite disc, an early precursor of the famous vinyl record.
The gramophone was heavily criticized, especially by musicians and musicologists, who worried that the device’s ability to record and play music was itself so astonishing that people would be distracted by the new experience and unable to concentrate on the music itself.
Ultimately, the invention of the gramophone and the vinyl record gave rise to one of the biggest markets of the 20th century: the commercial music market.
This invention changed the listening paradigm in ways that are still evolving. Listening practices changed, and music became a new form of entertainment built around industrial consumer products. Whoever once had to learn an instrument to hear a song privately now just had to buy a gramophone and some records to listen to it effortlessly, creating a new category of music enthusiast defined not by their ability on an instrument but by the private catalog of music they listened to.
At the same time, the most fertile artistic minds started to see the incredible new possibilities that the ability to record and play back sounds represented.
In 1948, at the Studio d’Essai in France, the French composer Pierre Schaeffer composed the Cinq études de bruits (“Five Studies of Noises”), the first example of what would later become musique concrète (“concrete music”), a new way of composing using only recorded sounds.
Schaeffer later laid out the main concepts of musique concrète in his 1952 treatise “In Search of a Concrete Music”, where he explained how sounds should be chosen based on their objective characteristics (timbre, inherent rhythm, morphology), and how listeners would have to change their approach to active listening by concentrating on a sound object’s characteristics instead of its origin or meaning.
These simple rules changed how twentieth-century classical music would be composed, and gave birth to many new artistic currents, ranging from the electronic music of the WDR studio in Cologne, which opposed this current by making music with only synthesized sounds, to Italy’s mixed music, which combined the French and German approaches.
On the other side of the coin, however, the invention of music recording and reproduction led to the mass diffusion of music, the subsequent creation of record labels, and the spread of an incredibly varied landscape of genres.
People could now discover new music, appreciate the variety of art currents,
and fully immerse themselves in the wonderful universe that music constitutes.
But how did the diffusion of music change how composers work?
From a technological perspective, the evolution of music recording equipment and synthesizers (both analog and digital), together with the miniaturization of electronics, granted composers half a century’s worth of new technologies to use in their artistic endeavors.
With the invention of the MUSIC-N software family by Max Mathews in the late 1950s and its later iterations such as Csound, composers could write algorithmic music, synthesize sounds with complex waveforms, and discover an infinite panorama of timbres to use in their music; and with the evolution of sample-based synthesizers in the 1990s, they could even achieve realistic results in recreating orchestral timbres.
All of this proved really useful to 20th-century film composers, but it ultimately changed how future generations of composers wrote and thought about music:
The invention of VSTs and the rise of the cinematic music genre in the early 2000s set a landmark in the film music field and inspired thousands of young composers to pursue their dreams, but they also represented an important change in the composition paradigm.
What I’m going to say from now on is a very personal perspective on the modern composition landscape, and may be a little extreme in places.
The habit of listening to music made with sampled instruments and then mixed to achieve a very particular sound means that modern composers of the acousmatic generations are usually oblivious to some practical acoustic implications of orchestration, and compose “for the machine”.
Composers of my age grew up listening to the incredible music of John Williams, Jerry Goldsmith, Alan Silvestri, Alfred Newman, and many others; we had great examples of compositional and orchestrational genius, and yet we fall short when it comes to writing music.
In the mid-1990s, the German composer Hans Zimmer became world famous thanks to his soundtrack for The Lion King, and would later completely change the panorama of film soundtracks in the 2000s with his music for Gladiator, The Da Vinci Code, The Dark Knight, Pirates of the Caribbean, and many other movies.
However much I love Hans Zimmer’s music, it is to me beyond dispute how much his influence shaped the compositional practices of all the young composers who grew up listening to it, and unfortunately not in a good way.
Although Zimmer’s music is wonderfully crafted, it can be described as a “pop” sound adapted to the orchestra, and it is governed by the same “simplistic” rules as commercial pop music.
That’s obviously not to say there is no complex pop music: we have masterful works by artists such as Elton John, David Bowie, Queen, Pink Floyd, and many others. But it’s undeniable that many early-2000s pop works felt the influence of changing music-production technology, with the intuitiveness and simplicity that sampling brought to the scene; rap and hip-hop, for example, are largely built on the manipulation of samples, resulting in simple yet charming music.
Yet, a trend began: a trend made of percussion-driven epic music and
strictly-ruled orchestrations, with sweet lyrical strings and strong pulsating brass
sections.
The sound of Hans Zimmer is a sound made by rhythmical music for action
scenes, and soaring romantic melodies for love scenes, a sound which instantly
became a hit:
Every composer, after The Dark Knight, Pirates of the Caribbean, and Inception, now wanted to be a little Hans.
Aside from the simplicity on which Zimmer built a music empire, made of
simple yet idiomatic harmonic structures and melodies, there is another factor
that changed the composition paradigm in the 2000s:
The evolution of Virtual Instruments.
In the late 1990s and through the 2000s, companies like Vienna Symphonic Library and East West started developing their own virtual instrument libraries, a big step forward from Roland’s SRXs and other romplers (hardware machines built on sampling technologies), granting incredible “realism” in playback.
Like every company, VSL, East West, and later Spitfire Audio and Orchestral Tools developed these new instruments following the trend of how music sounded in those years. After all, they had to appeal to composers who were being asked by directors to write music like “that famous movie with the soundtrack by Zimmer”.
Long story short, a vicious circle ensued: the trend was Zimmer’s “pop orchestra” style; companies like Two Steps From Hell started composing whole albums of trailer music in Zimmer’s compositional manner; the new epic cinematic music genre was born; and software houses instructed their orchestral players to record only the techniques that were most widely used (namely, the simplest ones: legato, sustained notes, staccato, pizzicato, and harmonics).
Young composers entering the music scene in those years didn’t really have to write complex orchestrations à la Debussy, or even with Mahlerian influences like John Williams; they just had to use those basic techniques and make good pop-orchestra music.
The vicious circle continued: those composers didn’t really need to know how to orchestrate complex timbres, since they weren’t asked to, and they didn’t really need to know how real acoustic instruments played and sounded, since VSTs were much more convenient money-wise than hiring full orchestras.
The wide diffusion of VST music changed how music sounded. If John Williams used eight French horns, he had to complement them with an enormous string section to balance the orchestral dynamics, resulting in a lush string timbre. With VSTs, composers could instead achieve the clarity and shine of a chamber orchestra’s smaller string section, paired with the power of eight French horns, thanks to the possibilities of the now ultra-advanced mixing process.
This led to some funny, if sad, results:
Quite some time ago I was watching the music tutorials of “Virtual Orchestration”, a YouTube channel sponsored by Berklee College of Music and Orchestral Tools, with the young composer Alex Lamy as the host of the video podcast.
A lesson caught my attention:
“ADDING MORE LIBRARIES just sounds (bad)? Here’s WHY! Orchestral
balancing with CC7 + mic positions”.
In the video lesson, Alex Lamy explained how his brass section sounded way off
balance with the string section in a fortissimo passage.
The solution was to adjust the volume of the tracks (not the dynamics the
instruments played in), and change the microphone settings (balancing the
sound between a close-mid-far and room range of microphones).
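To make the distinction concrete, here is a minimal Python sketch of the two MIDI messages involved. The controller numbers are standard MIDI (CC7 is channel volume, CC1 is the modulation wheel); the mapping of CC1 to a dynamic-layer crossfade is a common convention in orchestral sample libraries, not a universal rule, and the helper function is purely illustrative:

```python
# CC7 (channel volume) only scales the level of whatever sample is
# already playing; CC1 (mod wheel, which many orchestral libraries map
# to a dynamic-layer crossfade) changes WHICH dynamic layer you hear,
# and therefore the timbre of the instrument.
def control_change(channel: int, controller: int, value: int) -> bytes:
    """Build a raw 3-byte MIDI Control Change message."""
    assert 0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128
    return bytes([0xB0 | channel, controller, value])

CC_MOD_WHEEL = 1   # dynamics crossfade in many sample libraries
CC_VOLUME = 7      # plain playback volume

quieter_mix = control_change(0, CC_VOLUME, 64)     # same ff timbre, just less loud
softer_play = control_change(0, CC_MOD_WHEEL, 40)  # a genuinely softer dynamic layer
```

Turning down CC7 on twelve fortissimo horns gives you quiet fortissimo horns; only lowering the dynamic layer (or the number of players) changes what the section actually sounds like.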
In reality, there were quite a “few” problems upstream that went totally unaddressed:
First of all, the brass section was enormous.
A grand total of nine trumpets, twelve horns, six trombones, three cimbassi, and a tuba were playing, all at the maximum dynamic level permitted by the software (somewhere between ff and fff on a score), against a string section playing a spiccato and staccato ostinato passage.
A brass section of these proportions would rival Wagner’s and Mahler’s biggest orchestras, and would tear down a whole theater with its loudness.
The second problem is the nature of the libraries themselves:
The brass library, Tom Holkenborg’s Brass by Orchestral Tools, was specifically made to capture in high detail every nuance of the brass section, from the quietest ppp to the loudest fff, and offers various section sizes (solo trumpet, trumpets a 3, trumpets a 6, and so on).
The string library, Berlin Strings, on the other hand, presents a “full” 8 Violins I,
6 Violins II, 5 Violas, 5 Celli and 4 Double Basses.
Setting aside the fact that, historically, an ensemble of this size would barely be adequate for a chamber orchestra (1 flute, 1 oboe, 1 clarinet, 1 bassoon, 1 or 2 horns, 1 or 2 trumpets, and the strings), a section like that is acoustically very unbalanced: according to Rimsky-Korsakov’s orchestration manual, such a string section should be 8-6-4-3-2 instead of 8-6-5-5-4, as there are too many cellos and basses, which results in a low-frequency build-up.
Rationally, an enormous brass section would never balance timbrally against a string section barely adequate for a chamber orchestra, one sized for a brass section an eighth the size of the one on display. Even a 16-14-12-10-8 string section wouldn’t be enough for such raw firepower in the brass: you would need approximately 184 string players to balance that brass section.
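As a back-of-the-envelope check, the imbalance can be sketched in a few lines of Python. The six-strings-per-fortissimo-brass ratio below is my own hypothetical rule of thumb, chosen only to land in the same ballpark as the estimate above; it is not a figure from any orchestration manual:

```python
# Section sizes from the video lesson described above.
BRASS_SECTION = {"trumpets": 9, "horns": 12, "trombones": 6, "cimbassi": 3, "tuba": 1}
STRING_SECTION = {"violins I": 8, "violins II": 6, "violas": 5, "celli": 5, "basses": 4}

# ASSUMPTION (mine): roughly 6 string players are needed to balance
# one brass instrument playing fortissimo.
STRINGS_PER_BRASS_FF = 6

brass_total = sum(BRASS_SECTION.values())         # 31 brass players
strings_available = sum(STRING_SECTION.values())  # 28 string players
strings_needed = brass_total * STRINGS_PER_BRASS_FF

print(f"{brass_total} brass vs {strings_available} strings available")
print(f"strings needed for balance: ~{strings_needed}")
```

Under that assumption, 31 brass players would call for roughly 186 strings; even a large symphonic 16-14-12-10-8 section (60 players) falls far short, while the library's 28-player chamber section isn't remotely close.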
Of course, all’s well that ends well when Alex Lamy adjusts the track volumes and suddenly has the timbre of twelve horns at the “right” volume.
Of course, the diagnosed problem was that “two libraries are recorded differently, and volume/microphones need to be adjusted accordingly”, not that there was no orchestral balance in the number of instruments playing.
Now, I do not consider Alex Lamy a bad composer. I consider him part of a bigger cultural problem, in which a “composer” doesn’t know enough music history to correctly define what a sonata form is without internet-browsing, fails to provide correct information while doing so (“Tempo demystified: Why Beethoven’s Moonlight Sonata is ALWAYS PLAYED WRONG!”), and gets angry when the comment section corrects him (rudely, to be fair, but still, these are elementary definitions that every musician knows by heart). This is the result of an overly practice-oriented education system that doesn’t value tradition enough (and probably doesn’t even know the aforementioned traditions), and that worries too much about “we’re studying only Western music!”, as if that weren’t enough for an entire lifetime; they want to compress all the world’s musical traditions into a three-year university course.
Still, this habit of ours of listening almost entirely in an acousmatic fashion now results in complete obliviousness to how music really sounds in a spatial environment, and the over-saturation of VST music means we’re no longer even used to knowing how instruments play a passage (resulting in some hilarious phrasing and articulation choices by young composers like Samuel Kim and other cinematic music artists).
As a classically trained composer, I actually find it somewhat insulting that colleagues of mine call themselves composers while not knowing even basic music theory (let alone more complex practices like counterpoint, or how to balance harmonic tension in the macro- and micro-structure of a piece).
I usually reserve the term “composer” for the person who writes music in an artisanal way, with competence in music history, historical performance practices, and orchestration from early Gregorian chant up to contemporary music, and who puts a true artistic effort into what they do (without falling into the trap of psycho-romanticism, but that’s a point for another article). The person who instead writes music as a product, in a hyper-specialized manner, is a producer, not a composer.
I classify Lamy as a great producer, very competent at writing for VSTs and at using Cubase to mix his works, but I would not call him a composer, in the sense of someone expert in all things music composition.
I mean, if in an eight-year university course on music composition I’m expected to know how to write a flawless Baroque fugue, a sixteenth-century motet, a nineteenth-century piano piece, an opera, and a symphony, all while taking in-depth exams on Gregorian chant practices through the centuries and learning to orchestrate elaborate sound samples à la Schaeffer starting from a spectrogram, how can I consider someone who confuses movements with the sections of a sonata form to be in the same professional category as me?
Composers of the acousmatic generations, of which I’m a part, have now lost the sensibility of real acoustic performance and no longer know how instruments sound in real scenarios. (An example comes to mind, which I cannot find on YouTube, probably because of how poorly search algorithms work now: a guest young composer on a Native Instruments showcase played 8-note pizzicato chords on a solo violin library as if they were chords on a piano.)
What we can do to stop this is listen to live concerts, work with real in-the-flesh musicians, and stop playing back our music with the notation software’s playback engine.
I’d also argue that it’s fundamental to know the history of our art like the back of our hand, analyzing music of every style and era to get a good glimpse of how music really works.
I’m not advocating against a specific aesthetic; I’m the first to write Zimmer-style music when the job requires it, but I always do so with particular attention to detail, as if everything were to be played by real people, in a real orchestra, by real musicians, and I sincerely believe this adds to the music.
Published 07/07/2024 by Luca Ricci. All Rights Reserved.