As artists and mentors, it is crucial for us to acknowledge that Artificial Intelligence (AI) is here to stay. While it has the power to create, it lacks the essence of true creativity and probably always will. AI generates art without thoughts or emotions; it simply produces the output requested of it. Understanding this distinction is imperative in our evolving artistic landscape.

Over an eight-month period in 2022, a few colleagues and I embarked on an intriguing experiment with Soundraw, a Japanese AI music stem generator. The goal of our work was to test and eventually contribute to a long-term understanding of AI’s role in the realms of art, design, and media. This experiment, which culminated in the digital release of an album entitled “New Oddities,” stands as one of the pioneering ventures into AI-assisted music, using prompt-based music generation as a starting point and an AI-driven robotic arm that brings songs to completion on analog audio engineering consoles for a final master.

This collaboration with robot brains paved the way for a groundbreaking album that seamlessly blends my multi-decade experience with digital and analog technology. By utilizing AI-created music stems, we ventured into the future of music production (by way of technological tools previously unavailable) while simultaneously paying homage to the past (as we stand on thousands of years of technological advances and music theory that have made the art of today possible). Our creative process involved incorporating human touches, such as recitations of poems written in the 1800s, choirs and newscast samples from the 1990s, and even sonification audio of the black hole in the Perseus galaxy cluster, courtesy of NASA’s copyright-free database. The resulting album goes on a genre-bending adventure, from electronic and R&B to digital pop and punk rock, mostly following the tastes of the humans involved.

Here’s how it all worked:

I provided prompts to the AI, guiding its “jamming” process, which yielded many initial song ideas to explore. 

The prompt submission worked as follows:

After selecting one or more moods, you proceed to choose which genre(s) you want the songs to have. The next step is choosing tempo and length requirements. You can also pre-select the instruments you want included in your songs. The song list will adapt to your input. 
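The selection flow described above can be pictured as a simple data structure. This is purely an illustrative sketch in Python; Soundraw’s actual interface is a web UI, and the class and field names here are assumptions, not its real API:

```python
from dataclasses import dataclass, field

# Hypothetical model of a prompt submission. Soundraw's real
# product is a web interface; these names are illustrative only.
@dataclass
class SongPrompt:
    moods: list                  # one or more moods, e.g. ["dreamy", "dark"]
    genres: list                 # one or more genres, e.g. ["electronic", "punk rock"]
    tempo_bpm: int               # requested tempo
    length_sec: int              # requested song length
    instruments: list = field(default_factory=list)  # optional pre-selection

    def is_complete(self):
        # The song list adapts only once at least one mood and one
        # genre have been chosen and the tempo is sensible.
        return bool(self.moods) and bool(self.genres) and self.tempo_bpm > 0

prompt = SongPrompt(moods=["moody"], genres=["R&B"], tempo_bpm=92, length_sec=180)
print(prompt.is_complete())  # → True
```

The point of the sketch is simply that each step in the UI narrows the search: moods and genres are required, while tempo, length, and instruments further refine the generated song list.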

Through careful selection, I identified the raw foundations of songs that harmonized with the overall project vision. With these chosen song structures, I then delved deeper, tweaking the provided instruments, adjusting tempos, and fine-tuning the jam sessions into robust musical backbones. 

The human touch: 

The song selections offered are divided into five instrument bundles: Melody, Backing, Bass, Drum, and Fill.

You can individually change the “energy levels” of each bundle in each section. The colors will indicate the levels, with gray being muted, and a strong blue color representing a higher energy level.

Some blocks have gradients. These represent fade-ins and fade-outs, which apply volume automation to that section, allowing for dramatic build-ups or smoother transitions. You can apply this effect by right-clicking on one of the block’s edges.

Finally, there are additional settings covering beats per minute, instruments, volume level, and even key. This means you can choose or change the specific key or set of instruments in each song.
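Taken together, these controls amount to a grid of energy levels per instrument bundle per section, with fades acting as linear volume ramps. The following is a hypothetical sketch of that idea, not Soundraw internals; the bundle names come from the article, everything else is assumed for illustration:

```python
# Hypothetical sketch: each instrument bundle carries an energy level
# per section (0 = muted/gray, higher = stronger blue), and a fade is
# linear volume automation across the section.
BUNDLES = ["Melody", "Backing", "Bass", "Drum", "Fill"]

def fade_volume(start, end, steps):
    """Linear fade: interpolate volume from start to end over `steps` points."""
    if steps <= 1:
        return [end]
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

section = {bundle: 0 for bundle in BUNDLES}  # everything muted (gray)
section["Drum"] = 3                          # high energy (strong blue)

# A fade-in over five points, from silence to full volume:
print(fade_volume(0.0, 1.0, 5))  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```

In the real tool these adjustments happen by clicking blocks and dragging edges; the sketch only shows why a gradient block reads as a build-up, since volume rises step by step across the section.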

Once all of this tweaking was complete, true collaboration between humans and AI took place as we added layer upon layer of vocals, samples, and instrumentation to craft the final product.

Conclusions: 

Generative music and sampling have long been part of music creation, beginning with collage sounds made up of tape loops and electronic sound experiments as early as the 1940s. By the 1960s, the BBC had given these techniques a spotlight, using the technology on hand to create sounds and scores for its documentaries and TV programs. Today, laptops equipped with audio software, which would have seemed like science fiction in the early days of hip-hop when DJs sampled records to create new songs, have become commonplace and allow for the same kinds of sampling to occur.

What sets the “New Oddities” experiment apart is that, unlike previous approaches to collage-style songs that sampled copyrighted music—either illegally or through the payment of royalties to original composers—we utilized original creations by the Soundraw AI software. This approach brings a sense of liberation, particularly when considering the numerous videos and presentations that have been taken down from social media over copyright infringement claims, or the musical acts who for decades have faced legal battles over sampling, such as Negativland, Chumbawamba, and Girl Talk, to name a few!

Generative music often refers to music derived from machines and systems that create sounds, a concept popularized in recent years by Brian Eno, who primarily employed tape loops in his generative compositions. Our experiment differed from this type of generative music in that I used prompts to guide the software toward more of what I desired. One notable similarity with Eno’s approach, however, is the idea that generative music must continuously evolve, avoiding repetition and potentially lasting indefinitely. With the Soundraw software, it is indeed possible to create an endless variety of music, ensuring each tune stands as a unique creation. So unique, in fact, that the developers boast the songs will never trigger YouTube copyright strikes.

Did utilizing AI in this way take these roles away from humans who could have offered the same if not better output while being paid a living wage to do so? Possibly, yes. The implication is this: if your project has the budget for living, breathing, human musicians, then that is the way to go. But if you lack the connections or the budget to make that happen, this tool could be a way to create demos of songs for human collaborators, to get a sense of where the final recording will go, or to generate original samples for use in new compositions, as we did with “New Oddities.”

We must remember that specialists were being replaced long before AI. Blue-collar workers have dealt with this fear since the Industrial Revolution.

The fear of automation has also permeated various industries in our recent past, and artists are among the first to feel these effects with AI, witnessing a potential loss of livelihood. Today, software tools have empowered individuals to build websites, create high-quality beats, capture exceptional photos and videos, and more. This trend has been evident since the advent of the printing press, which rendered obsolete the role of scribes who meticulously copied books by pen and ink, or the invention of the digital camera, which made gelatin-based film “unnecessary.”

Artist Scott Christian Sava says of his art, “My art is a mosaic, an amalgamation of the art and artists that inspired me on my journey to become the artist I am today.” This sentiment resonates across all artistic tools as well as all art forms, including music, performance, sculpture, painting, illustration, and beyond. As artists, we stand on the shoulders of those who came before us, and integrating AI into our creative processes is simply an extension of this tradition.

These are ideas we must address and tackle head on as a species, but this is not something that our specific experiment focused on. We simply sought to understand what is achievable with the tools of today. 

Artists, instructors, and truly the general public will need to adapt to AI. In my view, if AI is utilized as a tool (in the way Procreate on a tablet can mimic a blank canvas and paints, or how Illustrator on a computer can take the place of a drafting table for typography) rather than a be-all, end-all, then we will be in a good place.

As we delve deeper into these frontiers, we anticipate that our efforts with “New Oddities” will fuel ongoing discussions surrounding AI’s integration into the artistic process. Since AI alone cannot generate true innovation, it will come to be seen as one element within a larger set of tools that human creators can employ to foster creative output. By embracing AI as a valuable tool, we can forge new paths and push the boundaries of creativity, ultimately enriching the arts, design, and media fields. 

Want more? Check out the Arts Calling Podcast where we look at each song!

Shahab Zargari

Shahab is a filmmaker, father and a huge geek.
