Last week Sir Paul McCartney spoke about how Peter Jackson brought back John Lennon’s voice, extracting it from an old cassette recording, whilst making the film “Get Back”. Now McCartney plans to release what he says will be the Beatles’ last record, using Lennon’s voice in the final mix.
McCartney rehearsed the pros and cons of what we might now term ‘Resurrection AI’ on BBC Radio 4, but seemed fairly laid-back about the implications, suggesting “we’ll just have to see where it all leads”. But we already have a few clues as to where this might take us.
Typically, AI is used in this way to bring back someone who has died, in order to put words in their mouth or to support a social cause. I am reminded of an example from The Times, in which engineers used samples of President John F. Kennedy’s voice to construct a realistic delivery of the speech he would have given in Dallas had he not been assassinated. The effect is spine-tingling.
Then there is the Mexican journalist who was resurrected using AI after he was murdered. The authorities never found his killers, so he was brought back with a message urging people to help find them. And of course there is the audio fake of Anthony Bourdain, used to promote a documentary film. In all three cases, the resurrection voiced something these people never actually said.
This suggests that a new industry around post-mortem rights is about to spring up to protect people against the commercialisation of a resurrected image. I spoke to Sam Gregory, executive director of the human rights organisation Witness. He said: “At a moment like this we really need to place a lot of premium on the questions of consent and disclosure and what are legitimate usages in certain contexts.”
But of course, it might work the other way: in the future we may find it just as easy to use AI to erase people from content. If a band falls out with one member, dismissing them perhaps for holding the wrong opinions on social issues, how likely is it that the band will employ AI to remove that person from past promotional materials (given that most are now digital), or erase them from a recording entirely?
In fact, Sam points to all kinds of AI-enabled uses of imagery to depict an imagined future: “We just had in the U.S. an ad that was an imagined future. It was the launch of the Republican National Committee response to President Biden’s 2024 launch campaign, in which they created a fake ad of an imagined future where he wins again. There are big questions about when it is legitimate to use someone’s image for an imagined future.”
Questions are mounting about how we use images and audio of people who exist, or who no longer exist, as we continue to employ AI to blend together experiences of past, present and future.
And creating rights is going to be a complicated business because, and there is no denying this, we all use AI to project our own image on a daily basis. “90 percent of it is going to be synthetic by 2026, that is often quoted… but how helpful is that?” asked Sam. “It’s dependent upon whether it’s the 90 percent we care about, or it’s the 90 percent that we want to be synthetic, like my TikTok feed. I’m kind of expecting and enjoying that.”
We certainly need to address the human rights that resurrection AI could infringe, but how to legislate for it is far from clear. In the future, we might actually want 90 percent of media to be synthetic, like a new Beatles record. When all the members of the band have passed, it is precisely because these brand-new releases are 90 percent unreal that they are, in that moment, 100 percent enjoyable.
You can listen to a full-length version of the conversation with Sam Gregory about generative AI and human rights on The Future of You podcast when it airs on 22nd June.