This summer, the machines were let loose in Hollywood. The questions raised don’t just matter for show business, but for all of us
Whenever the world is all confusion and strife, I always think it pays to return to the great myths. So let me tell you the story of Space Jam: A New Legacy (2021). Basically, in this movie reboot that came out in July, the real-life basketball player LeBron James has to team up with Bugs Bunny, Daffy Duck and all the other Looney Tunes to battle against a rogue artificial intelligence – Al-G Rhythm – who wants to control all the productions at a movie studio. The irony, as I wrote in a newspaper review at the time, is that A New Legacy is “so blandly crowd-pleasing that [it] might have been designed by… an algorithm”.
But there’s another layer of irony that I didn’t mention in my review: A New Legacy really might have had an algorithm involved in its design. At the beginning of last year, the studio that made it, Warner Bros., struck a partnership with a company called Cinelytic that has developed an “AI-driven project management system” to help with the greenlighting of movies. The idea is that Warners executives will be able to take some film pitch and feed it into the system, which then runs through its internal data fields to make predictions about how much the film might be worth in particular places, or with particular stars, or in particular release windows. It’s the kind of power that Al-G Rhythm always wanted.
When the Warners-Cinelytic deal was announced, it was, at once, extremely striking and somehow easy to ignore. A big studio using machines as part of its creative process is no small thing. Yet it also seemed like just the sort of move a big studio would pull in the 21st Century – a very modern way of achieving the same old goal of bigger profit margins.
But over the past few months, at least in the weird, cinephiliac corners of the internet where I hang out, the encroachment of AI into movies has become impossible to ignore – and not just because of Space Jam. There was the documentary about Anthony Bourdain that used AI to replicate the late chef and writer’s voice. There was a short film that was made to look like the work of Federico Fellini, after AI sifted through the Italian maestro’s filmography to identify its defining qualities. There was even that trailer for the Hugh Jackman silliness Reminiscence where deepfake tech puts your face – or your worst enemy’s – in the centre of the frame.
Two things stand out about these developments. The first is how far along they already are: from script selection to voiceover narration, artificial intelligence is already part of the filmmaking process in Hollywood. The second is that AI is accomplishing things that were already possible in a pre-AI world – through different means and with patchier results. The Bourdain voiceover might have been achieved with an impersonator. The Fellini movie could simply have been made by a proper student of the director’s work. Even the most far-out example, the deepfake, is not so different to the sort of early CGI work that, in 1994, used archive footage to place Forrest Gump in the same room as John F. Kennedy.
But this is almost the point of AI: it does the same jobs, only quicker, easier and with greater accuracy. And it does them so much quicker, so much easier and so much more accurately that eventually, as the technology progresses, it will seem like magic. In their new book AI 2041, which imagines the world as it might be two decades from now, today’s Tortoise ThinkIn guest Kai-Fu Lee and his co-author Chen Qiufan talk about this process as it relates to deepfakes:
“In the early days of deepfake technology, factors like Internet speed and exaggerated expressions could easily cause glitches, resulting in images that blurred, or out-of-sync lip movement. Even if the glitch lasted for only 0.05 seconds, the human brain, after millions of years of evolution, could sense something was amiss. By 2041, however, DeepMask – the successor of deepfake – had achieved a degree of image verisimilitude and synchronization that could fool the human eye.”
So imagine the perfection of what we are already seeing on our screens. Instead of the pretty convincing de-aging of Robert De Niro that was done for The Irishman (2019) – incidentally, a special effects innovation that involved a bit of AI – imagine something that is utterly convincing, a total recreation of the younger Bobby D. Instead of the weird, dead-eyed and floppy-lipped facsimile of the late Peter Cushing that was made for Rogue One: A Star Wars Story (2016), imagine something that is as close to the resurrection of Cushing as science will allow.
Or go wild. What about an actor made to look exactly like some other real-life figure for a biopic? What about a composite actor made from the most attractive parts of others? This is the cinematic future that AI promises to deliver.
And this is where we come crashing into the hurdle marked ethics. The release of the Bourdain documentary, Roadrunner, was instructive in this regard. Much like Cushing, Bourdain is no longer with us – he died in 2018. But the film’s director still wanted Bourdain to read out the words of an email that he had written while alive. So AI was drafted in to process hours of recordings and then to stand in as the chef’s voice. The resulting sound clip occupies a few seconds of the finished film, and nothing signals its provenance: there’s no indication that it isn’t Bourdain himself.
The absence of signposting was one part of the controversy that grew around Roadrunner’s release. Another was the testimony of Bourdain’s widow, who didn’t exactly say that she had given her blessing for the technique to be used. But what about Bourdain himself? What did he think? That’s probably the most important question, but, strangely, it wouldn’t need to be asked were Bourdain still around to answer it.
The way the studios are trying to overcome these posthumous questions is by involving estates where they can. Peter Cushing’s digital appearance in Rogue One was okayed by the owners of his estate and later praised by his former secretary. But estates aren’t always perfect arbiters. What if they’re strapped for cash, and put the bottom line ahead of their forebear’s artistic legacy? What if their pool of brain cells has shrunk over the generations? These questions aren’t moot; they could become fundamental to the process of moviemaking, particularly in an era of big-screen franchises, when actors – both alive and dead – are expected to appear in one movie after another.
None of this is to dismiss AI, nor what it could offer to filmmakers. After all, the movies are, almost uniquely, an art form founded on technological development. It started with the motion picture camera itself, and then it continued with colour and sound and special effects and 3D. At every stage, there have been questions and scraps. Do audiences really need to hear the actors yammering on? Is 3D an artistic tool or just a way of rinsing cinemagoers for more money?
But never before have the questions been so rooted in ethics and in law. In movies – and in everything – AI is potentially an exciting development, yet definitely a complicated one. We need to talk about Al-G Rhythm.
Photograph courtesy of Netflix