On Monday, the leadership of the Screen Actors Guild-American Federation of Television and Radio Artists, or SAG-AFTRA, held a members-only webinar to discuss the tentative contract agreement the union reached last week with the Alliance of Motion Picture and Television Producers (AMPTP). If ratified, the contract will officially end the longest labor strike in the union’s history.
For many in the industry, artificial intelligence was one of the most controversial and frightening issues of the strike. Over the weekend, SAG released details of its AI Agreed Terms, a broad set of protections that require consent and compensation for all actors, regardless of their status. With this agreement, SAG has gone substantially further than the Directors Guild of America (DGA) or the Writers Guild of America (WGA), which preceded it in reaching agreements with the AMPTP. This is not to say that SAG succeeded where the other unions failed, but rather that actors face a more immediate existential threat from advances in machine learning and other computer-generated technologies.
The SAG agreement is similar to the DGA and WGA agreements in that it requires protections in any case where machine learning tools are used to manipulate or exploit members’ work. All three unions have called their AI agreements “historic” and “protective,” and whether or not one agrees, these agreements serve as important milestones. AI doesn’t just pose a threat to writers and actors: it has ramifications for workers in all fields, creative or not.
For those who look to Hollywood’s labor fights as a model for how to deal with AI in their own disputes, it matters that these agreements contain adequate protections, which is why I understand those who have questioned them or pushed for them to be stricter. I am among them. But there is a point at which we are pushing for things that cannot be achieved in this round of negotiations, and that may not need to be pushed for at all.
To better understand what the public generally calls AI, and the threats it is perceived to pose, I spent months during the strike meeting with many of the leading engineers and technology experts in machine learning, as well as legal scholars in both Big Tech and copyright law.
The essence of what I learned confirmed three key points. First, the most serious threats are not the ones we hear about most in the news: the people most likely to be harmed by machine learning tools are not the privileged but low-income and working-class workers and marginalized and minority groups, because of the biases inherent in the technology. Second, studios are just as threatened by the rise and unregulated power of Big Tech as the creative workforce is, something I wrote about in detail earlier in the strike here, and which WIRED’s Angela Watercutter astutely expanded on here.