You have probably heard the following argument many times: Artificial Intelligence (AI) will take over the unappealing, repetitive jobs that humans do not want to do. Humans will then be free to spend their time on creative tasks, such as creating art and music.

Correct? Well, think again.

Since the late seventies, researchers have developed algorithms capable of composing extraordinary music. One of the pioneers was David Cope, Professor of Music at the University of California, Santa Cruz (USA). Cope developed EMI (Experiments in Musical Intelligence), one of the first programs capable of composing original music through three objective steps: decoding previous compositions, processing the data gathered from those pieces to identify musical patterns, and finally composing new pieces based on the patterns identified.
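
EMI's real implementation is far more elaborate, but the general idea of extracting patterns from a corpus and recombining them into new material can be sketched with a toy Markov-chain model. The miniature corpus and names below are purely illustrative assumptions, not Cope's code:

```python
import random
from collections import defaultdict

# Step 1: a tiny "decoded" corpus of melodies, written as note sequences
# (illustrative data only, not actual Bach chorales)
corpus = [
    ["C4", "E4", "G4", "E4", "C4"],
    ["C4", "D4", "E4", "D4", "C4"],
    ["E4", "G4", "C5", "G4", "E4"],
]

# Step 2: process the corpus into patterns: which notes follow which
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

# Step 3: compose a new melody by recombining the learned transitions
def compose(start="C4", length=8):
    melody = [start]
    while len(melody) < length:
        options = transitions.get(melody[-1])
        if not options:  # no observed continuation for this note
            break
        melody.append(random.choice(options))
    return melody

print(compose())  # e.g. ['C4', 'D4', 'E4', 'G4', 'E4', 'D4', 'C4', 'E4']
```

The output always recombines fragments heard in the corpus, which is why such systems can sound like the composers they were trained on.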

In this way, EMI composed over 11,000 original pieces, often “resembling” famous composers such as Bach, Mozart, and Beethoven. Many of these compositions are available on YouTube, Spotify, and other digital platforms.

Here is an example of a Bach-style chorale composed by EMI.

Thus, music is simply another field (alongside photography, data analytics, and transport) in which algorithms have mastered a process that was once reserved for, and valued in, humans.

Algorithms compose songs in various ways: for example, by analyzing the most popular songs of a particular genre, a particular artist, a single album, or a phase of an artist’s career. However, there is no guarantee that artificially composed songs will be appealing to listeners. We still need human feedback to tell the algorithm whether a song is good or not. Nevertheless, the more feedback algorithms receive from humans, the better they become, as they learn to avoid patterns that listeners perceived as unpleasant. Furthermore, except for electronic music, AI-composed songs still need to be recorded by humans, which will always ensure a human influence on artificial compositions.
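
A minimal sketch of that human-in-the-loop feedback cycle might look like the following; the pattern names, scores, and update factors are assumptions for illustration, not any vendor’s actual system:

```python
import random

# Hypothetical musical patterns with equal initial feedback scores
pattern_scores = {"riff_a": 1.0, "riff_b": 1.0, "riff_c": 1.0}

def pick_pattern():
    # Sample a pattern in proportion to its accumulated feedback score
    patterns, weights = zip(*pattern_scores.items())
    return random.choices(patterns, weights=weights, k=1)[0]

def register_feedback(pattern, liked):
    # Boost patterns the listener liked; shrink those perceived as unpleasant
    factor = 1.5 if liked else 0.25
    pattern_scores[pattern] = max(pattern_scores[pattern] * factor, 0.01)

# Simulated listener feedback: riff_b is disliked twice, riff_a is liked once
for pattern, liked in [("riff_b", False), ("riff_a", True), ("riff_b", False)]:
    register_feedback(pattern, liked)

print(pattern_scores)
# {'riff_a': 1.5, 'riff_b': 0.0625, 'riff_c': 1.0} -> riff_b is now rarely chosen
```

In this simple scheme, a disliked pattern is never banned outright, only made increasingly unlikely, which mirrors how feedback-driven composers gradually stop reproducing material that listeners found unpleasant.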

Currently, an extraordinary number of stakeholders are investing in AI to compose music: for example, AIVA (a Luxembourg start-up creating emotional compositions for soundtracks and commercials), Flow Machines (Sony Computer Science Laboratories), Humtap, IBM Watson Music, Jukedeck, Chordpunch, Amper Music, Magenta (Google Brain), Brain.FM, Melodrive, Popgun, and The Echo Nest.

All of these companies are fully devoted to creating and improving algorithms that compose music for different contexts, with extraordinary results. Moreover, given the exponential development of technology, artificially composed music will only improve.

For example, listen below to one hour (yes, one HOUR) of artificially composed music produced by AIVA:


Current Research Findings

In the studies I have conducted here at LiveInnovation.org with my students at IUBH University of Applied Sciences (Bad Honnef, Germany), the results are quite clear. Overall, respondents have a negative perception of AI-composed music, especially in high-involvement musical contexts (e.g., singer-songwriters, acoustic music, and bands). Acceptance is far greater in low-involvement contexts (e.g., commercials, soundtracks, and public spaces). However, when I investigated this perception further with experiments, the findings were quite different: once participants liked what they heard, it was irrelevant whether the song had been artificially composed. Furthermore, when the music was used in advertising, how the song was composed had absolutely no effect on how participants perceived the product or the brand.

In other words, it is acceptable for companies to use artificially composed music in advertising and to benefit from its clear economies of scale. Meanwhile, it is simply a matter of time until acceptance of AI-composed music increases in high-involvement contexts as well.


So How Will AI Impact the Future of the Music Industry?

Well, it depends on the angle from which you analyze it.

From the composers’ standpoint, the future seems gloomy. So far, there are no clear answers on the authorship of artificially composed music, and essentially anyone with an electronic device will become a composer. As always, once a technology masters a process, the task is devalued as it becomes accessible to everyone.

For performers, I believe not much will change. Humans will always admire others performing; the identification and admiration of leaders and alpha characters is entrenched in our evolution, and we will continue to be impressed by someone performing on a stage. However, given that anyone will be able to create original compositions, the number of performers is expected to increase substantially. Do not be surprised to see an even greater number of “music sensations” appearing on the charts simply because they have an appealing or intriguing image and a catchy beat produced by algorithms.

Finally, from the listeners’ angle, we will live in a world of infinite music. New apps, sites, and software will allow for new forms of musical experience, with compositions customized for individual people and moments. It will be easier than ever for anyone to listen to sounds and songs that trigger a desired emotional response. You will love it.

As Bob Dylan famously wrote: “The Times They Are a-Changin’”.