The great Italian sculptor Michelangelo allegedly once said, “Every block of stone has a statue inside and it is the task of the sculptor to discover it.” We have always believed that only humans held the incredible skill of imagining and creating art, as if it represented a form of self-expression unique to our own species. However, this perspective has changed.

Throughout history, humans have persistently sought to optimize processes and solve problems more efficiently. The main thread connecting our changes from hunter-gatherers to the agricultural revolution, from the industrial revolution to the digital and now the autonomous era, is the arrival of new technologies to solve old and new problems. Along the way, we kept developing more sophisticated solutions, such as the bow and arrow, irrigation systems, factory machines and, later, software. But what would the ultimate solution be?

Well, the automation of processes through “intelligent technologies” has long been perceived as the ultimate optimization. After all, there seemed to be no better way to solve problems than by developing “something” that could think, decide and act by itself, with little or no human input, which would immensely reduce the margin of error. These premises, and the first attempts, actually date back many decades.

In 1950, Alan Turing proposed the Turing test. In 1951, Christopher Strachey developed the first “AI”, although it was only in 1956 that John McCarthy, during an academic conference at Dartmouth College, used the term “artificial intelligence” for the first time. In 1959, MIT launched its AI lab, and in 1974 the Stanford AI lab produced the first autonomous vehicle. As you can see, the idea of automating processes is not new.

Many decades have gone by since then, and as with other technologies in history, AI has been applied in contexts other than the ones it was initially designed for. For example, it has reached creative sectors, such as music.


AI and Music Composition

In case you are not a musician, let me tell you a secret: behind every song or melody you hear, there is mathematical reasoning. The Greek philosopher and mathematician Pythagoras discovered the mathematical relationship between tones and ratios in music and sound waves, and laid the basis for music theory and harmony in the Western world. No wonder Pythagoras is often called not only the “father of numbers” but also the “father of harmony”.

And what does AI have to do with it? Well, one of the tasks most suited to artificial intelligence is analyzing extremely large amounts of data and identifying patterns. Given the mathematical basis of music theory, AI can be applied to music for exactly this purpose.
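To make the “mathematical reasoning” behind music a bit more concrete, here is a tiny illustrative sketch in Python of the Pythagorean idea that musical intervals correspond to simple frequency ratios. The 440 Hz reference pitch and the variable names are my own assumptions for the example, not anything from Pythagoras or from any of the AI systems discussed here.

```python
# Illustrative only: musical intervals as simple frequency ratios.
# The 440 Hz reference pitch (A4) is an assumption for this example.

A4 = 440.0  # reference pitch in Hz

intervals = {
    "unison": 1 / 1,
    "perfect fourth": 4 / 3,
    "perfect fifth": 3 / 2,
    "octave": 2 / 1,  # doubling the frequency
}

for name, ratio in intervals.items():
    # Each interval above A4 is obtained by multiplying the base frequency.
    print(f"{name:>14}: {A4 * ratio:7.2f} Hz (ratio {ratio:.3f})")
```

Patterns like these, repeated across thousands of pieces, are exactly the kind of regularities an algorithm can detect.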

Take David Cope, for example. In 1977, he started working at the University of California and, a few years later, faced writer’s block. He had been hired to compose a classical piece but simply could not come up with interesting ideas. Under professional pressure, he sought an innovative solution: he combined his knowledge of music theory and programming to create what later became EMI, Experiments in Musical Intelligence.

In a nutshell, EMI was a tool that composed music through three essential steps. First, “Deconstruction”: analyzing music compositions and separating them into parts. Second, “Signatures”: identifying commonalities that characterize the style of a genre or composer. Lastly, “Compatibility”: recombining pieces, patterns and styles to create new original works. Through this process, EMI composed thousands of songs.
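For readers who like to see the idea in code, the sketch below is a deliberately toy rendering of those three steps in Python. The data representation, function names and random recombination are my assumptions for illustration only and bear no relation to Cope’s actual implementation.

```python
import random

# A minimal, hypothetical sketch of EMI's three-step idea.
# Phrases are represented simply as lists of note names.

def deconstruct(pieces):
    """Step 1 - Deconstruction: split each piece into short phrases."""
    return [phrase for piece in pieces for phrase in piece["phrases"]]

def find_signatures(phrases):
    """Step 2 - Signatures: keep phrases that recur, i.e. patterns
    characteristic of a composer or genre."""
    counts = {}
    for phrase in phrases:
        key = tuple(phrase)
        counts[key] = counts.get(key, 0) + 1
    return [list(key) for key, n in counts.items() if n > 1]

def recombine(signatures, length=4):
    """Step 3 - Compatibility: reassemble signature phrases into a new piece."""
    return [random.choice(signatures) for _ in range(length)]

# Toy usage with a two-piece "corpus".
corpus = [
    {"phrases": [["C", "E", "G"], ["G", "B", "D"], ["C", "E", "G"]]},
    {"phrases": [["C", "E", "G"], ["A", "C", "E"]]},
]
new_piece = recombine(find_signatures(deconstruct(corpus)))
print(new_piece)
```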

Decades after EMI, technology in general (software and hardware) improved greatly. As a consequence, multiple companies have developed AI-based solutions for music composition, for example AIVA, Jukedeck, Humtap, Endel, Amper, Brain.FM, Melodrive and Popgun. All these tools have helped listeners, musicians, composers and producers co-create with AI or simply listen to artificially composed music.


Do Listeners Enjoy It?

But if music and art are such intrinsic forms of human expression, how do music listeners react to artificially composed songs? I recently published a paper in the Journal of Consumer Marketing titled “Artificial intelligence became Beethoven: how do listeners and music professionals perceive artificially composed music?”, which tackles this question.

The paper shows that, in general, the perception of using artificial intelligence to compose music is rather negative. After all, listeners admire the ability of artists to express sincere human emotions through songs. However, during an experiment, we manipulated the description of how the songs had been composed. For one group, we told them a fictitious story about the emotional reasoning behind the composition. For another group, we told them AI had autonomously composed it.

Main findings? The fact that AI had been used during the composition process had no effect, as long as participants enjoyed what they heard. This finding is actually not as surprising as it may seem. Even if we do not appreciate the process through which an artistic output was created, in the case of music it is hard to react negatively when a song triggers an involuntary positive emotional response in us.


What Does the Future Hold?

We should expect an exponential increase in artificially composed music. Whether for meditation, music therapy, soundtracks for games and series, jingles or pure hedonistic enjoyment, AI will be used in recording studios. Why? It streamlines processes and provides extraordinary economies of scale.

The challenge is that it will be difficult to judge the intention and ability of artists. Audiences will be left guessing. Once a song or album is released, they will have to rely solely on the artists’ word to know if the track is an honest human expression of emotion and technical ability, or if it was co-created with AI or autonomously generated.

For me, a fan of “organic music” and artists such as The Beatles and Bob Dylan, this is an immense shame. But for a new generation, born in the autonomous era, artificially composed or co-created music might simply represent a new form of creativity.

I just hope that, in the coming years, there will still be others like me interested in listening to songs that someone composed alone in a bedroom, with an instrument, during a moment of sadness or joy.

After all, at least so far, algorithms cannot feel emotions.