Comment

Should Musicians Embrace AI in their Songwriting?

By April 12, 2023


AI Generated Image from PicsArt.

To many, artificial intelligence seems alien. It’s clearly ingraining itself into our culture, with the potential to filter out harmful content on social media, revolutionise the agricultural industry with machinery that can predict weather patterns, and run smart devices that carry out domestic chores with minimal human interaction. Its transition into the arts is at a benign stage: streaming platforms are governed by algorithms and, little by little, it’s trickling into the songwriting process. Musicians tend to occupy two camps: the majority roll their eyes when it’s mentioned, citing its impracticalities and how much it’ll suppress the individuality of artists, while the projects that have championed it have unearthed a new wave of ingenuity.

Technology has been tightly knit to songwriting over the last century, whether that’s Bill Putnam pioneering delay and reverb and standardising stereo recording in post-war Chicago or the inception of digital audio workstations in the 90s that saw the emergence of bedroom producers. However, AI isn’t anywhere near the music history books yet. Sure, it’s still in its infancy, but the available examples of what it can do leave a lot to be desired: lyric-writing aids tend to be lifeless and songs manifested on apps like Amadeus and AIVA are tediously basic. These apps might be the ones with the more enticing USPs – imagine writing a number one with the tap of a spacebar – but they are also the ones with paper-thin longevities. Artificial intelligence has already made a mark with highly regarded audio technology company iZotope, who have introduced ‘machine learning algorithm[s]’ able to suggest adaptive mixing advice. Though it’s far from perfect in its current phase, a major name embracing AI and finding a functional use for it without it having a stranglehold on artistry is reassuring.

At the core of this, the golden scenario should be a marriage of artificial intelligence, spurring on ideas and generating unfamiliar sound worlds, with a person’s touch and decision-making. That said, it’s not yet accessible to all musicians; the more technologically inclined have constructed whole computers to assist their creativity.

Holly Herndon is a forerunner, given the critical praise for her 2019 album ‘PROTO’ – a project as disconcerting as it was astonishing for the lurid character she cranked out of an inanimate machine called ‘Spawn’, co-created with her partner Matthew Dryhurst. As amazing as the album is, not everyone has a PhD from Stanford or a mind wild enough to birth musical intuition from what looks like a deconstructed computer case. It mimics the exclusivity of the Fairlight: the famed trailblazer of sampling that paved the way for digital synthesis, which would cost you tens of thousands but make your music sound like Kate Bush’s (perhaps Spawn’s hardware will raise eyebrows at Arturia and Novation). If this innovation can inspire sound designers and instrument developers, it could indirectly shift the underlying sonics of today’s songwriting and production.

The obvious drawback to all of this: if technology can output millions of songs, why would companies hire humans any more? Right now, at least, AI can’t convincingly replace humans. There have been dozens of videos titled “I made an AI write a [insert band name here] song” scattered across the internet and none of them have been uncanny enough to satisfy hardcore musos. It can’t resurrect Kurt Cobain or Jimi Hendrix’s guitars and, even if it were able to, programming such a thing would stain hallowed legacies. On the flip side, generating virtual celebrities is an intimidating, extreme and entirely feasible consequence.

Take the rise of Miquela: a fully animated influencer who has been given coverage that up-and-coming artists could only dream of getting – millions of followers on TikTok and Instagram, ‘modelling’ for Calvin Klein and Prada, being pictured with the likes of Millie Bobby Brown, Rosalia and Bella Hadid, and interviewing J Balvin, JPEGMAFIA and King Princess at Coachella. It makes sense from a fashion perspective – in essence, brands only have to liaise with a graphic designer instead of an entourage of make-up artists, hairstylists and so on. As sickeningly disposable as that makes the current personnel in the industry, a CGI character can’t make ten thousand people connect to ninety minutes of music. Even Gorillaz, a beloved collective of musicians performing as alter egos, need a physical presence on stage to realise their vision.

It should never be the case that gigs are repurposed as stand-up cinema experiences. The day my eyes are fixed on a screen instead of a singer is the day we can all call it quits. Companies are interested in efficiency and cost. There surely isn’t any reason to employ a composer for an advert if you can vaguely elicit similar emotions with thirty seconds of computer-generated noise, right? Why would you give young songwriters experience, career momentum and a minuscule (but very much appreciated, in the grand scheme of things) fee when there’s a safer, even cheaper solution?

Although the current AI narrative leans towards negativity, there’s lots to like. At face value, lacklustre ambient tracks and novelty posthumous recreations don’t give much hope, but the software yet to be developed has every chance of winning over unconvinced producers. Seeing as it has positioned itself as one of the defining buzzwords of this decade, musicians can’t bury their heads in the sand and ignore its existence. Rather, they should understand how it can broaden their own creativity and pray that the higher-ups won’t plunge us into an entertainment dystopia where humanity no longer has a role – though that is a drastic scenario and one that, you’d imagine, wouldn’t be appreciated by audiences.

A new realm of sound processing and design will perk up producers’ ears. So too will real-time mixing assistance (once the software is perfected), and the art of lyric writing won’t be binned in favour of soulless drivel from ChatGPT and similar word generators. Being able to wholly digitalise composition doesn’t make an overhaul inevitable. That would be a vacuous future of factory-line music-making that no one with an ounce of common sense would green-light.

***

Listen to the playlist specially made for this article:
