In a recent, almost surreal turn of events, the Beatles, a band that etched its name in the annals of musical history, found a posthumous revival. This was not through tribute bands or covers but through the marvels of AI. A little-known song titled “Now and Then,” recorded by John Lennon as a home demo in the late 1970s, a few years before his death in 1980, was given new life. For decades the track lay dormant, its recording muffled and deemed too challenging to refine. It was a rough gem, Lennon's voice lost in a low-quality mix captured on a simple boom box.
Enter filmmaker Peter Jackson, whose AI tool performed an audio miracle. It separated and clarified Lennon's voice, cutting through the static of time and technology. This breakthrough allowed the surviving Beatles – Paul McCartney on piano, slide guitar, and bass, and Ringo Starr on drums – to add their parts and complete the song. The result was nothing short of a digital resurrection. “Now and Then” was officially released on November 2, 2023, quickly amassing nearly 30 million views on YouTube, and its vinyl edition became the fastest-selling single of the year in the UK, with over 19,400 copies sold. Give it a listen and, with how clear his voice is, you might believe for a moment that John Lennon really had come back.
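The actual de-mixing tool used on the Lennon demo is proprietary and relies on machine-learned source separation, far beyond anything shown here. Still, the core idea – pulling one sound out of a mixture by operating in the frequency domain – can be illustrated with a toy sketch. The sketch below is purely illustrative: the "voice" and "hiss" signals are synthetic stand-ins I invented for the example, and a real vocal would never be separable with a simple band mask like this.

```python
import numpy as np

# Build a one-second synthetic "recording": a low-frequency tone standing in
# for a vocal, buried under high-frequency "tape hiss". (Both are made up
# for illustration -- this is not how the real demo sounded.)
sr = 8000                                   # sample rate in Hz
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)         # stand-in for the vocal
hiss = 0.3 * np.sin(2 * np.pi * 3000 * t)   # stand-in for tape noise
mixture = voice + hiss

# Separate in the frequency domain with a crude band mask:
# keep only the bins below 1 kHz, where our "vocal" lives.
spectrum = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(len(mixture), d=1 / sr)
mask = freqs < 1000
recovered = np.fft.irfft(spectrum * mask, n=len(mixture))

# Because the two synthetic sources occupy disjoint frequency bands,
# the mask recovers the "vocal" almost exactly.
err = np.max(np.abs(recovered - voice))
```

Real recordings are vastly harder because a voice and its backing instruments overlap in frequency; that is why modern de-mixing tools use neural networks trained on thousands of examples rather than fixed masks.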
But the Beatles’ story is just the beginning of this AI-fueled journey into the past. Warner Music has set its sights even higher, announcing plans to use AI to recreate the voice of the legendary French singer Édith Piaf for an upcoming biopic titled “Edith.” This ambitious project aims to narrate her story in her own, digitally reconstructed voice. Piaf’s vocal essence will be captured from hundreds of voice and image clips, with the goal of authentically replicating her unique style and delivery. The collaboration between Warner Music and Piaf’s estate is a groundbreaking endeavor, though it's still in development with no concrete release date.
What we’re seeing here is the usage of AI to revive the dead – at least the voices of the dead. But what could be next?
First Creative Completion, Then Complete Digital Resurrection?
Ever been utterly engrossed in a book series, only to discover the author left this mortal coil before wrapping it up? It's like being left hanging on the last rung of a narrative ladder. But here's a twist: soon, generative AI might swoop in to save us from this literary limbo. Imagine AI as a sort of digital literary executor, finishing the works that writers left dangling in mid-air.
Currently, generative AI is great at juggling words and dabbling in images. So, it's not a stretch to think we'll soon see new AI models, or perhaps spruced-up versions of current ones, stepping into the shoes of authors and artists who checked out before the final curtain. Take Charles Dickens, for example. You might know him for "David Copperfield" or "Oliver Twist," but he left us hanging with “The Mystery of Edwin Drood,” half-finished at six out of twelve installments. And he didn't leave any breadcrumbs to follow for the rest of the story. It's like a literary cliffhanger without a resolution. Now, what if AI, after a thorough Charles Dickens boot camp, took a stab at finishing it?
But why stop at literature? The art world is littered with unfinished masterpieces too. Take Leonardo da Vinci, the poster boy for artistic perfectionism. He left behind a trail of half-finished works, at least four that are pretty famous, and a bunch more that art buffs love to argue about. The question isn't whether AI can wear the hat of these past masters; it's more about how well it can match their style and substance.
As we follow this trend, it's not just about painting or writing. Imagine AI dabbling in architecture, design, and even medicine. The big question is, as AI gets better at mimicking human creativity, could we see a future where it replicates not just art or literature, but the very essence of a person? We're talking about a digital resurrection of sorts, where AI brings back more than just the works of the departed, but their digital personas too.
Bringing a Black Mirror Episode to Reality
If you haven't watched Black Mirror, Netflix’s dystopian anthology series exploring the dark side of tech, now is the time. The episode I’m specifically referring to is “Be Right Back,” the first episode of Season 2. It centers on a woman named Martha who, after losing her boyfriend Ash in a car accident, uses a ChatGPT-like interface to communicate with an AI simulation of him, trained on his past social media posts, photos, and videos. Eventually, she purchases a robot body to house the AI. But here's the kicker: as the episode wraps up, Martha realizes that this AI Ash is about as good at being human as a pig is at being a frog. He can't quite capture those pesky human emotions and complexities.
Black Mirror nudges us to ponder a rather intriguing thought. As our developers keep tinkering with AI to bring back voices from vinyl graves and maybe soon, to finish novels and paint masterpieces, we're inching closer to a world where AI might just be able to create a clone of a person. Imagine that – a digital doppelganger, minus the soul.
And then there's this nugget to chew on: if AI gets so good that it can mimic the crème de la crème in every field, from singing like Sinatra to painting like Picasso, where does that leave us mere mortals? Our trump card in this high-stakes game might just be our ability to feel and imagine – human emotions and creativity.
Really enjoyed this thought-provoking piece. As the AI revolution continues, my mindset has shifted from wondering what the possibilities are to wondering how fast we'll get there. It's also interesting that as AI progresses, the workers afraid of being replaced are now white-collar workers, when we used to assume blue-collar jobs would be the ones automated away.