Issue #12: 🎻Harmonizing Innovation: AI In A Minor Event Recap — Jason Michael Perry

Howdy, we did it! Last week I stood in front of a packed crowd at the Meyerhoff in Baltimore and presented AI in A Minor, the first AI-composed concert performed by the wonderfully talented musicians of the Baltimore Symphony Orchestra. On stage, I was also joined by a brass quintet, a string quartet, my incredibly talented team of engineers, our CEO Todd Marks, and a panel of experts from many different disciplines, and everyone knocked it out of the park! Of course, as with any major event, so many people were instrumental in making it a reality. I can’t begin to say enough thanks to my team, our sponsors, and everyone who made last week such a huge success.🏆

The Baltimore Symphony played four pieces, three of which we composed using various AI tools. Attendees were first introduced to the original compositions that influenced our work, but jaws dropped as we transitioned from Mozart’s Eine kleine Nachtmusik to new AI-generated music, influenced by the original piece yet often nearly impossible to distinguish from it.

At Mindgrub, we’re massive supporters of open source and understand the importance of giving back to the same community that helped make things like “AI In A Minor” possible. So I thought for this week’s newsletter, I would share more about composing an AI classical concert, but first, here are a few thoughts on tech & things 🔗:

⚡️Even OpenAI can’t tell if AI writes your content. This is not a huge surprise. After all, training a computer to write like a human means AI-generated content will read like the human work it’s trained on. Detection will also get harder and harder as the quality of AI increases. The optimal solution involves adding a digital watermark to label AI-generated content, though implementing this is more challenging for text than for binary formats such as images, music, or videos.

⚡️EV charging operator ChargePoint hopes to prove that late is better than never. With its extensive charging network in North America, ChargePoint aims for nearly 100% uptime, a long-awaited commitment for EV users, especially considering past experiences with non-Tesla charging options. Have you had any luck with ChargePoint stations? Let me know in the comments 👇

Now let’s talk about AI, how it works, and how one may compose AI-generated music.

Artificial intelligence goes by many names, including machine learning, natural language processing, and computer vision. Each of these is a facet of AI aimed at creating computer technology capable of doing things that once only humans could. Self-driving cars use computer vision to recognize objects, natural language processing allows a computer to understand text and speech, and chatbots make decisions based on the information you provide them – all skills that once required a human being.

In addition to these facets, AI can be compartmentalized into two categories: narrow artificial intelligence and artificial general intelligence (AGI). Narrow AI is what you are most familiar with. This type of AI is typically specialized, taught information meant for a specific task or need. For example, autosuggestions on your phone or computer use AI to predict the next letter or word in a sentence, and they do this by analyzing the probability of the next letter based on the books and other text used to train that AI model.
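
To make that concrete, here’s a toy sketch in Python of the underlying idea: count which character tends to follow which in a small training text, then suggest the most likely next one. Real autosuggest models are far more sophisticated, but the learn-a-pattern-from-data idea is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only (not any production autosuggest): learn which character
# tends to follow which from a tiny "training corpus", then suggest the most
# likely next one.
corpus = "the quick brown fox jumps over the lazy dog. the dog sleeps."

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # count how often `nxt` appears right after `current`

def suggest_next(char: str) -> str:
    """Return the most probable next character observed after `char` in training."""
    candidates = follows.get(char)
    return candidates.most_common(1)[0][0] if candidates else ""

print(suggest_next("t"))  # likely 'h', because "th" dominates this tiny corpus
```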

AGIs like OpenAI’s ChatGPT are trained on as much information as possible to create a well-rounded system whose knowledge is broad but perhaps not as deep. ChatGPT, or Google’s Bard, knows tons of information and can answer more questions or generate more concepts than a narrow AI could.

Ultimately, these AIs require a model trained from a data set and reinforced by validating the information it’s given. Suppose I want to create an AI that uses computer vision to identify animals. In that case, I first need to train the computer on the types of animals I want it to know and then test it to make sure it actually understands the data I give it. To keep things simple, I might decide that my AI should identify dogs, so I need to give it as much information on dogs as I can possibly find so the AI can begin to recognize the pattern of a dog. I can then test that same AI by giving it valid and invalid examples of dogs and reinforcing its learning by quizzing it.
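
Here’s a hedged sketch of that train-then-quiz loop using scikit-learn on synthetic data standing in for image features. It isn’t a real dog detector; it just shows how holding examples back lets you validate what the model learned.

```python
# Toy sketch of the train-then-quiz loop described above, using scikit-learn on
# synthetic feature vectors standing in for photo features. This is not a real
# dog detector; it only illustrates holding data back to validate a model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Pretend each row is a feature vector from a photo: label 1 = "dog", 0 = "not a dog".
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold some examples back so we can quiz the model on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "quiz": how often does the model label unseen examples correctly?
print("quiz accuracy:", accuracy_score(y_test, model.predict(X_test)))
```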

At some point, like a human, if the computer knows what a dog is, it should be able to sketch one, and that is the basis of generative AI. For music, our models are trained on as much music as possible along with metadata such as the artist, genre, and tempo. This allows us to compare artists or ask a music model to generate something new that feels influenced by a particular artist or composer.
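
One common way to feed that metadata to a sequence model is to encode it as conditioning tokens placed in front of the note events. The tag format and note events in this sketch are hypothetical, purely to illustrate the idea:

```python
# Purely illustrative: the tag format and note events below are hypothetical,
# not from any specific model. The idea is to place metadata tokens in front
# of the note events so the model learns the association during training.
def build_training_sequence(artist: str, genre: str, tempo_bpm: int, notes: list) -> list:
    """Prefix note events with metadata tokens describing artist, genre, and tempo."""
    conditioning = [f"<artist:{artist}>", f"<genre:{genre}>", f"<tempo:{tempo_bpm}>"]
    return conditioning + notes

sequence = build_training_sequence(
    artist="mozart",
    genre="classical",
    tempo_bpm=120,
    notes=["NOTE_ON_60", "NOTE_OFF_60", "NOTE_ON_64"],  # placeholder note events
)
print(sequence)

# At generation time, supplying only the conditioning prefix asks the model for
# new music "influenced by" that artist, genre, and tempo.
```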

For AI in A Minor, we researched and explored many options, including building our own custom music model – but timing limited what we could do, so we built our event on top of existing AI models.

👉 Riffusion was the first AI tool we encountered. It uses spectrograms, or 2D image representations of audio, to generate new music. Since spectrograms are images, Riffusion uses a version of Stable Diffusion trained on tons of them to create new, original pieces. As a showcase of what is possible, Riffusion is impressive. Still, we ran into two issues – it generates MP3 files that we could not easily break into instrument tracks to create sheet music, and the audio quality contained digital artifacts.
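
For the curious, here’s a rough sketch of that spectrogram round trip (not Riffusion’s own code) using the librosa library: audio in, a 2D mel spectrogram out, and an approximate reconstruction back to audio. The reconstruction is lossy, which is part of why digital artifacts creep in.

```python
# Rough sketch of the spectrogram round trip (not Riffusion's own code), using
# librosa. The input filename is a placeholder.
import librosa
import soundfile as sf

y, sr = librosa.load("input_clip.wav", sr=22050)

# Forward: audio -> mel spectrogram, a 2D array you can treat like an image.
mel = librosa.feature.melspectrogram(y=y, sr=sr)

# Inverse: spectrogram -> audio via Griffin-Lim phase reconstruction. The round
# trip is lossy, which is one source of the digital artifacts we heard.
y_reconstructed = librosa.feature.inverse.mel_to_audio(mel, sr=sr)
sf.write("reconstructed_clip.wav", y_reconstructed, sr)
```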

The folks behind Riffusion connected us with a group of professors and ex-Google employees behind a brilliant AI tool named Anticipation. We quickly discovered that Anticipation was one of the most significant AI music tools we had encountered. Unlike Riffusion, Anticipation worked with MIDI files and allowed us to separate instrument tracks to generate our sheet music easily; it also let us give it parts of an original piece of music and ask it to complete that piece, something we repeated over many iterations to create a new AI-composed piece in the style of Mozart.
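
To show why working in MIDI made sheet music so much easier, here’s a short sketch (not the Anticipation codebase) that splits a multi-instrument MIDI file into one file per instrument using the pretty_midi library:

```python
# Hedged sketch (not the Anticipation codebase): split a multi-instrument MIDI
# file into one file per instrument with pretty_midi. The filename is a placeholder.
import pretty_midi

source = pretty_midi.PrettyMIDI("generated_piece.mid")

for index, instrument in enumerate(source.instruments):
    name = pretty_midi.program_to_instrument_name(instrument.program)

    # Copy this instrument into a fresh single-track MIDI file.
    part = pretty_midi.PrettyMIDI()
    part.instruments.append(instrument)
    part.write(f"track_{index}_{name.replace(' ', '_')}.mid")
    print(f"wrote part for {name} with {len(instrument.notes)} notes")
```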

🎧 Listen here: https://soundcloud.com/mindgrub/sets/ai-in-a-minor

👉 The last AI tool we used was a commercial product out of the UK named AIVA, which is phenomenal and comes with a well-designed desktop app and web interface. AIVA let us create generation profiles that ranged from the type of music to the feeling we wanted the music to convey, with an influencing piece of music as a starting point. I’m still impressed with AIVA’s speed, but we quickly learned that it excels at generating pop music, which is much more repetitive than most classical pieces. In the classical space, music from Baltimore composer Philip Glass worked well in AIVA, but other influences could miss the mark.

🎧 Listen here: https://soundcloud.com/mindgrub/sets/ai-in-a-minor

👉 We’re working on a deeper write-up for those looking for a technical deep dive or how-to. Still, in the meantime, the Mindgrub team packaged the 🔗AWS instances and all their dependencies into a marketplace instance, making it easier for developers to spin up Riffusion and Anticipation and explore how to use them.
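
If you’d rather script the launch than click through the console, a boto3 call along these lines should do it; every identifier below is a placeholder rather than the actual Mindgrub listing:

```python
# Minimal boto3 sketch of launching an instance from a marketplace AMI. The AMI
# ID, instance type, and key name are placeholders, not the actual Mindgrub
# listing; check the marketplace page for the real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder marketplace AMI ID
    InstanceType="g4dn.xlarge",        # assumption: a GPU instance for the models
    KeyName="your-key-pair",           # placeholder SSH key pair name
    MinCount=1,
    MaxCount=1,
)
print("launched:", response["Instances"][0]["InstanceId"])
```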

-Jason

P.S. I thought converting a MIDI file into sheet music would be the easiest part of this project, but it caused the most last-minute pain. We used a tool named LilyPond that worked perfectly, but it also illustrated that AI-generated music can push past the bounds of what an instrument can play or the typical flow of a piece. Of course, this is not an issue with LilyPond or sheet music but with the AI tools.
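
If you want to try the MIDI-to-notation step yourself, one possible route (not necessarily the exact pipeline we used) is music21, which can parse a MIDI file and export LilyPond source for engraving:

```python
# One possible MIDI-to-notation route (not necessarily the exact pipeline we
# used): parse the MIDI with music21, then export LilyPond source that the
# LilyPond engraver can turn into a printable score. Filenames are placeholders.
from music21 import converter

score = converter.parse("generated_piece.mid")
score.write("lilypond", fp="generated_piece.ly")  # then run: lilypond generated_piece.ly
```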

To help, we enlisted a fantastic composer, Christopher Enloe, who helped us understand the bounds the music needed to stay within and told us which generated pieces were genuinely playable.