Recently I traveled to Garmisch-Partenkirchen, in the Bavarian Alps (Germany), and attended a party at which a DJ played songs for hours.

There I witnessed a pattern common at parties: sometimes the DJ played songs that made almost everyone get up and dance, and other times he played songs that sent everyone back to their seats.

They were all famous and successful songs. The songs were not the problem. The problem was that the DJ lacked data to predict human response. He had no way to measure the variables that would reveal the crowd’s emotional state and thus allow him to predict whether they were “in the mood” for some songs and “not in the mood” for others.

As a consequence, sometimes he was successful and made everyone dance. Other times he missed completely and had to quickly change the tune.

It was constant trial and error, involving audio stimuli and emotional responses. For about five hours. With an immense margin of error.

[Image: people dancing. Caption: Parties: emotional triggers and biochemical responses.]

AI, Biotechnology and Music Composition

We have discussed many times here at LiveInnovation.org the use of artificial intelligence to generate musical compositions, and I have also shared some of the studies I have been developing on this topic.

The discussion, however, has always centered on the capacity of algorithms to understand patterns in musical compositions and create new songs. That is because, so far, the main discussions in industry and academia have focused on the composition process, not on predicting human emotional response.

However, when we listen to a song, a series of biochemical reactions takes place in our brains, triggering us to “feel” a particular way and to express that feeling through behaviors and expressions.

These behaviors and expressions, resulting from musical stimulation, may include smiling, singing along, raising our arms, the urge to dance, and much more. Without these behavioral cues, it is difficult for others to judge the immediate acceptance of a particular song.

However, this is all about to change.

Due to the exponential development of biotechnology, we will soon be able to identify human emotional reactions at a distance, without the need for any explicit behavior.

Recently, during a fascinating talk at How to Academy, the author Yuval Noah Harari raised a very insightful point while being interviewed by Natalie Portman:

“What is art? Art is about inspiring human emotions. It can inspire sadness, fear, joy. And if all artists are playing on human emotional system, which is the human biochemical system, there is a chance that computers could become the best artists in the world.”

You can watch the discussion on this particular topic by clicking on the video below:

And I couldn’t agree more.

Currently, we have a limited ability to track human emotional responses or obtain real-time data on them. But this will change rapidly with the exponential development of biotechnology.

If the main goal of music and art is to trigger human emotions, it will be extremely hard for any human musician to compete against the precision of algorithms fed with data from biotechnology innovations.

Artificial intelligence and biotechnology will become perfect music partners.

Together, they will track our emotional responses and create an endless number of compositions capable of eliciting any desired emotional response.

PS: One quick suggestion: Yuval is a fantastic author. If you are interested in his thoughts on technology and societal development, I highly recommend his book “Homo Deus”.

It is a wonderful read and will give you insights that go way beyond music.


Applications of AI and Biotechnology as Music Composers

When the moment comes that we can actually track human emotional responses in real time, algorithms will be able to adapt their musical compositions in real time as well.
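To make this concrete, here is a minimal sketch of what such a feedback loop might look like, assuming a biometric feed that reports the crowd’s arousal on a 0-to-1 scale. Everything in it (the read_crowd_arousal function, the tempo bounds, the gain) is a hypothetical illustration rather than an existing API, and the sensor is simulated with random noise:

```python
import random

# Hypothetical sketch: a simple proportional feedback loop that nudges
# the tempo toward a target crowd arousal. read_crowd_arousal() stands
# in for a biometric feed that does not exist yet; it is simulated here.

def read_crowd_arousal() -> float:
    """Stand-in for a biometric sensor; returns crowd arousal in [0, 1]."""
    return random.random()

def adapt_tempo(current_bpm: float, target_arousal: float, gain: float = 20.0) -> float:
    """Nudge the tempo toward the arousal level we want the crowd to reach."""
    error = target_arousal - read_crowd_arousal()
    new_bpm = current_bpm + gain * error        # proportional correction
    return max(80.0, min(160.0, new_bpm))       # keep the tempo plausible

bpm = 120.0
for tick in range(5):                           # five control cycles
    bpm = adapt_tempo(bpm, target_arousal=0.8)
    print(f"tick {tick}: adjusted tempo to {bpm:.1f} BPM")
```

The point of the sketch is the loop itself: measure the emotional state, compare it to the desired one, and adjust the music accordingly, over and over.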

This capability could have many applications, for example:

  • Clubs: The scene described at the start of this article will rarely, if ever, happen again. DJs (if they still exist) will be able to choose songs and beats based on real-time tracking of crowd response. They will also be able to automatically generate breaks, drops, and changes in tone and volume to produce the desired emotional and behavioral responses (a sketch of this follows the list).
  • Concerts: Concerts may become a new form of experience for the crowd. Venues will need to explicitly disclose whether AI will be used to manipulate human emotions. On some occasions the audience will attend precisely to be emotionally led (as we do in theme parks); in other contexts, they will explicitly reject it. Understanding when the intervention is wanted and when it is unwanted will be very important.
  • Advertising: If emotional manipulation is the key term describing the merger of AI and biotechnology, influencing reactions toward products and brands will most certainly become common practice. Companies will be able to predict human responses to brands and products while developing commercials.
  • Public spaces: Music is often used for crowd management: to calm people during moments of conflict or to induce engagement, as in football stadiums and arenas. In both scenarios, real-time emotional tracking will be highly relevant to achieving those goals.
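Returning to the club scenario, here is a minimal sketch of crowd-aware track selection. The playlist, the energy attribute, and the predict_response function are all invented for illustration; a real system would learn its predictor from biometric data rather than use a hand-written rule:

```python
from dataclasses import dataclass

# Hypothetical sketch: pick the next track whose predicted crowd
# response is highest, given the crowd's current arousal level.

@dataclass
class Track:
    title: str
    energy: float  # 0.0 = calm, 1.0 = intense

def predict_response(track: Track, crowd_arousal: float) -> float:
    """Toy predictor: assume crowds respond best to tracks slightly
    above their current arousal, and penalize large mismatches."""
    return 1.0 - abs(track.energy - (crowd_arousal + 0.1))

playlist = [
    Track("Slow Burner", 0.3),
    Track("Steady Groove", 0.6),
    Track("Peak Hour Anthem", 0.9),
]

crowd_arousal = 0.55  # would come from real-time biometric tracking
next_track = max(playlist, key=lambda t: predict_response(t, crowd_arousal))
print(f"Next up: {next_track.title}")
```

The same scoring idea generalizes to the other examples above: replace the playlist with commercials, stadium announcements, or concert setlists, and replace the toy predictor with one trained on real emotional data.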

These are just a few examples. By understanding the logic behind the combination of AI and biotechnology, one can quickly see that the applications are almost limitless.

Music will simply be another way of manipulating human emotion.


Final Thoughts

Emotional tracking and manipulation are a reality. Their use in music is simply one more context in which they will be applied.

Personally, I feel great apprehension when I consider a world so different from the one I am used to, and when I think that the days of purity and honesty in music composition are numbered and will soon be gone.

And when they do occur (a musician organically composing a song), it will be difficult for listeners to notice.