And it has been this way since time immemorial, but its effectiveness as a tool for communicating meaning has always occupied a somewhat blurry space.
On the surface, music has plenty in common with language, as opposed to other art forms like painting. It can be written and read via its own symbols, and there are sounds associated with them that can be reproduced vocally. The questions start to pop up when we talk about music as a way to say something specific, which it clearly has some difficulty doing without the use of lyrics. Still, there is evidence that points to it being rather decent at communicating in broad strokes, sometimes even across cultures.
In Universality and Diversity in Human Song, a study led by researchers at Harvard University and published in the journal Science, the authors describe how consistently music maps onto human behaviour and emotion across cultures:
“Music is often assumed to be a human universal, emerging from an evolutionary adaptation specific to music and/or a by-product of adaptations for affect, language, motor control, and auditory perception. It exists in every society (both with and without words), varies more within than between societies, regularly supports certain types of behaviour, and has acoustic features that are systematically related to the goals and responses of singers and listeners”.
While this doesn’t necessarily solve the riddle, it’s clear that there are patterns in music that are interpreted in very similar ways by different listeners, especially within regions that share the same musical traditions. One of the most common is the association of tempo and harmony with a certain mood. A slow piece of music in a major key instinctively conveys a serene and peaceful feeling, while the same piece in a minor key tells a more somber story. Fast pieces in major keys feel energetic and joyous, an invitation to dance, while fast pieces in minor keys convey tension, urgency, or outright chaos.
There are many other cues that add even more information, like pitch, rhythm and the use of dissonance. Many of these are also used heavily in spoken languages, where they are indispensable for communicating effectively, bringing music a bit closer to languagedom (invented word of the day).
With all this in mind, it’s perfectly possible to sit through the more than four hours of Richard Wagner’s opera Tristan und Isolde without understanding a word of German and still have a pretty accurate idea of the mood of almost every scene just by listening to the music. Unfortunately, when the time comes to figure out the actual plot details, music runs out of steam. It seems that, at the end of the day, music is closer to some kind of body language than to a spoken one. It can convey a wide array of emotions and moods, but it can only zoom in so far.
However, as bonkers as this may sound, that doesn’t mean that it can’t be used as a language if somebody decides to create one using the notation system in which music is written. After all, they are ultimately symbols that translate into sounds. Certain sequences of notes could mean certain things, and we could string them together to make phrases in the form of melodies, some of them beautiful, others not so much, depending on what we want to say. We could start in a specific key when referring to the past, the present, or the future, and so on. Perhaps it just takes someone brave (or crazy) enough to turn music into the true universal language.
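To make the idea concrete, the scheme described above can be sketched as a tiny program. Everything in it is invented purely for illustration: the motif-to-word vocabulary, the key-to-tense convention, and the fixed three-notes-per-word rule are all hypothetical choices, not any established system.

```python
# A purely hypothetical toy "musical language": short note motifs stand in
# for words, and the opening key of the phrase marks its tense.
# All mappings below are invented for illustration.

# Invented vocabulary: each "word" is a three-note motif.
VOCABULARY = {
    ("C", "E", "G"): "hello",
    ("D", "F", "A"): "friend",
    ("E", "G", "B"): "travel",
}

# Invented convention: the key of the phrase encodes its tense.
TENSE_BY_KEY = {
    "C major": "present",
    "A minor": "past",
    "G major": "future",
}

def decode_phrase(key, notes):
    """Decode a melody into words, reading three notes per motif."""
    tense = TENSE_BY_KEY.get(key, "unknown tense")
    words = []
    for i in range(0, len(notes), 3):
        motif = tuple(notes[i:i + 3])
        words.append(VOCABULARY.get(motif, "?"))
    return f"[{tense}] " + " ".join(words)

# A six-note melody in C major decodes to two words.
print(decode_phrase("C major", ["C", "E", "G", "D", "F", "A"]))
# → [present] hello friend
```

The same melody transposed to A minor would decode to the same words in the past tense, which is exactly the kind of systematic, arbitrary pairing of sound and meaning that real languages rely on; whether anyone could learn to hear such a code fluently is another question entirely.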