Imagine a time when music fans will attend concerts performed by human musicians, but where all songs have been composed by artificial intelligence (AI).

This time has arrived.

Organized by Sony Computer Science Laboratories (CSL), the concert took place in October 2016 at the Gaîté Lyrique in Paris, as part of the “Intense Science Festival”, which featured innovations in the fields of music, artificial intelligence, agroecology and language. The concert included songs performed by human musicians, but the songs had been developed through an artificial intelligence tool named “Flow Composer”.

Brief Explanation: The “Flow Machines” project is led by François Pachet in a partnership between Sony CSL and Université Pierre et Marie Curie (UPMC). Funded by the European Research Council, the partnership is gradually contributing to and changing the music industry. One example is the development of “Flow Composer”.

In short, “Flow Composer” analyses musical compositions to identify patterns (such as chord progressions, melody structures, arrangement and much more) and, based on that analysis, is capable of generating new songs (for more detailed information, visit their site).
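Flow Composer’s actual model has not been published in detail, so the following is purely an illustrative sketch of the general idea of “learn transition patterns from a corpus, then generate new sequences”: a first-order Markov chain over chord symbols. The corpus, chord names and function names here are all hypothetical assumptions, not part of the Flow Machines system.

```python
import random

# Illustrative sketch only: a first-order Markov chain over chord symbols.
# Flow Composer's real model is far more sophisticated; this merely shows
# the general pattern-learning idea described above.

def learn_transitions(progressions):
    """Count which chord follows which across a corpus of progressions."""
    transitions = {}
    for prog in progressions:
        for current, nxt in zip(prog, prog[1:]):
            transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length, seed=None):
    """Random-walk the learned transitions to produce a new progression."""
    rng = random.Random(seed)
    chord, out = start, [start]
    for _ in range(length - 1):
        chord = rng.choice(transitions.get(chord, [start]))
        out.append(chord)
    return out

# A toy (hypothetical) corpus of chord progressions.
corpus = [["C", "Am", "F", "G"], ["C", "F", "G", "C"], ["Am", "F", "C", "G"]]
model = learn_transitions(corpus)
print(generate(model, "C", 8, seed=1))
```

A real system would of course model melody, rhythm, arrangement and long-range structure jointly, not just local chord transitions, but the learn-then-generate loop is the same in spirit.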

Sony has made available on their official YouTube channel a few extracts from performances, displaying different ways in which music produced by AI can be applied in the near future.

Examples of Songs Performed During the AI Concert and the Rationale of Composition

1. Analyzing an Artist’s Career

Brief description: In this example, the AI tool Flow Composer (http://www.flow-machines.com) was used to analyze 45 songs by The Beatles, encompassing a great amount of the band’s work. In such cases the software has a larger sample to process, potentially allowing greater precision in identifying patterns and creating new material.

Moreover, the larger number of songs also allows the software to capture an artist’s changes over time. The Beatles, for example, became famous partly for having gone through very distinct phases, such as the one captured on their beautiful album “Sgt. Pepper’s Lonely Hearts Club Band”.

  • Performed by: Benoît Carré
  • Song title: Daddy’s Car

2. Analyzing an Artist’s Specific Album

Brief description: Another possible use of the software is to identify patterns within a specific album by an artist. This was achieved in the song below, which resulted from the analysis of the album “Rubber Soul” by The Beatles. In such applications, musicians or fans will be able to create new original music based on a specific period of an artist, rather than on most of their body of work, as in the previous example.

  • Performed by: Kumisolo 
  • Song title: Kagerou San

3. Combining a Mix of Musical Influences

Brief description: If a band is interested in creating new songs based on a specific combination of influences, Flow Composer can achieve this as well. In the example below, one can hear the song that resulted from a mix of “Yaton” by Beak, “Vitamin C” by Can, “Tresor” by Flavien Berger, “Supernature” by Cerrone and “Rain” by Tim Paris.

  • Performed by: Lescop
  • Song title: A Mon Sujet


4. Analyzing a Specific Genre or Style

Brief description: Another way of creating music through AI is to deeply analyze a music genre or style and afterwards generate songs that resemble that style with great precision. In the example below, over 452 American standards were analyzed by the AI tool Flow Composer for the “Shadow” project.

  • Performed by: Benoît Carré
  • Song title: Où est passée l’ombre

5. Analyzing One’s Own Style

Brief description: After having produced a few songs, an artist or band intuitively develops their own style of composition. However, under great pressure from the industry, or simply through lack of inspiration, one may hit writer’s block and become unable to generate new songs. AI might be an alternative for overcoming such difficulties. In the example below, the software generated a new piece through the analysis of 16 songs by ALB, a French electropop musician, and the piece was performed by the musician himself.

  • Performed by: ALB
  • Song title: Didadooda

Brief description: Similar to ALB, Housse de Racket performed a new song generated by Flow Composer based on their own songs. In this case, however, only 6 songs by Housse de Racket were analyzed, revealing the software’s capacity to create quality pieces even when working with smaller data sets.

  • Performed by: Housse de Racket
  • Song title: Futura

6. Real-Time Composition and Interpretation

Brief description: For music fans interested in improvisation and innovation, AI music may also offer the opportunity to watch musicians interpret original pieces as they are composed live, in real time, by the software. As the software produces the composition, the musicians face the challenge of understanding and interpreting it simultaneously, under the pressure of time and of an audience.

  • Performed by: Camille Bertault and Fady Farah

Songs Will Only Improve

One of the main characteristics of AI is the ability to learn from the process it is involved in. In other words, based on feedback from musicians and listeners, the software can quickly identify which patterns of creation led to songs that were “acceptable” and which did not.

Consequently, with time, it becomes able to generate mostly songs that are perceived positively by a larger audience. So, in case some of the compositions above did not appeal to you, or did not clearly resemble the artists analyzed, simply give “Flow Composer” time. It will certainly improve.
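How Flow Composer actually incorporates feedback has not been disclosed, but the feedback loop described above can be sketched in a deliberately simplified form: keep a sampling weight per creative pattern, nudge it up when listeners approve and down when they do not, and sample future patterns in proportion to those weights. All function names, patterns and the update rule below are illustrative assumptions, not the real mechanism.

```python
import random

# Hypothetical sketch of a listener-feedback loop: patterns that were
# "acceptable" gain sampling weight, the rest lose it. Nothing here
# reflects Flow Composer's actual learning mechanism.

def update_weights(weights, pattern, liked, lr=0.1):
    """Nudge a pattern's sampling weight up (liked) or down (disliked)."""
    w = weights.get(pattern, 1.0)
    weights[pattern] = max(0.01, w + lr) if liked else max(0.01, w - lr)
    return weights

def sample_pattern(weights, rng=None):
    """Pick a pattern with probability proportional to its weight."""
    rng = rng or random.Random()
    patterns = list(weights)
    return rng.choices(patterns, weights=[weights[p] for p in patterns])[0]

# Two toy (hypothetical) chord-progression patterns receiving feedback.
weights = {}
update_weights(weights, "I-V-vi-IV", liked=True)
update_weights(weights, "ii-V-I", liked=False)
print(weights)
```

Over many rounds of such feedback, the approved patterns dominate the sampling distribution, which is one simple way to make a generator drift toward what a larger audience perceives positively.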

“Human Bias”

Another important factor in the acceptance of AI music is the “human bias” in the interpretation of a new musical piece. One should not judge the quality of an AI song based purely on the interpretation of one single artist.

In essence, all versions shown above are “covers”, as none of the musicians actually composed the songs. As is well known, every song allows multiple interpretations, leaving immense room for “human bias” in whether a song is perceived positively or negatively.

Thus, in order to better judge the compositions above, it would be interesting to have a greater pool of musicians, with different styles and instrumentation, performing the same AI composition. This would certainly allow a more comprehensive understanding of the composition.

Future Research Questions

Looking into the future, there are numerous research questions that should be addressed. Here are some examples:

  • If listeners were aware, would they be interested in attending concerts where all songs were created by AI?
  • Will listeners place less value on musicians who have used software as part of their song composition process?
  • Should concerts where purely AI songs are performed be cheaper?
  • Will the music industry explicitly reveal when songs are generated by AI or will it hide the fact from listeners?
  • For decades, record labels have used paid songwriters to compose songs for signed artists, ranging from rock bands to popular “boy bands”. Given that this has been a highly successful practice, is it essentially any different from adopting software for music composition?

In the near future, we at LiveInnovation.org will address some of these questions as part of our research projects.

So stay tuned and watch this space.