Tensions are high out there for workers who fear being replaced by AI. As we’ve previously discussed, multiple industries are grappling with how to navigate this new technology in a tale that is ultimately as old as time. Our own Michael VanDervort recently wrote about how transparency on the subject can help put workers at ease and ultimately help companies avoid third-party infiltration. After all, unions are keen to pounce upon any worker concern while promising they can vanquish a perceived threat.
So, it’s no wonder that a primary focus of this summer’s historic double-strike of the WGA and SAG-AFTRA revolves around the fear that AI could replace writers and actors. As with all things involving California, this outcome could foreshadow how workers, companies, and unions navigate the issue elsewhere and in other industries.
SAG-AFTRA remains on strike, but the WGA has ended its 148-day walkout against the Alliance of Motion Picture and Television Producers. It’s an outcome that might have gone differently had SAG-AFTRA not joined the “party” in mid-July. The combined pressure of a near-total work stoppage led studios and streaming networks to send their CEOs into several days of intense final negotiations with the WGA bargaining committee.
In the end, a tentative contract surfaced, and writers are now back to work. Will all be well for the workers? As Nancy Jowsky recently detailed, the UPS/Teamsters agreement wasn't as much of a union win as it was sold to be. Is the same true for the WGA/AMPTP agreement on the subject of AI?
The answer is a complicated one. The full 94-page Memorandum of Agreement covers minimum staffing, pay rates, residuals, and several other issues, but again, we are zeroing in on the AI portion. Let’s pop into a key part of the Summary of the MOA, which ostensibly is what’s being used to persuade WGA members to vote for this agreement:
“AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the MBA, meaning that AI-generated material can’t be used to undermine a writer’s credit or separated rights.
“A writer can choose to use AI when performing writing services if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services.
“The Company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.
“The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.”
On the surface, this reads like a significant union victory, and yes, this is more of an immediate win (emphasis on the “immediate”) than a loss for writers, but it’s nowhere near a long-term slam dunk.
In the WGA’s full MOA, Article 72 (on Generative Artificial Intelligence) is a windier version of the summary quoted above. The full version spells out how AI can be used by studios and writers, with limits on its scope. What’s interesting, however, is that even though the results do benefit writers, this outcome isn’t necessarily a union win or the result of anything the union achieved in negotiations.
Also, the language of this section does as much to protect studios as it does writers, although neither side attempted an “outright ban” on AI. And to be fair, that does not seem like an achievable goal.
Let’s start with two realities: (1) it is in the studios’ best interests to retain as much ownership of IP as possible; (2) it is in writers’ best interests to be paid fairly and not be replaced by AI. Both sides have gotten what they want out of Article 72, the AI section, which is essentially one giant compromise: humans must be involved in creating “literary material” to the point where it cannot be considered GAI-generated material.
Why? Because the U.S. Copyright Office does not consider GAI content to be copyrightable. Yet, who’s to say that the USCO won’t change its mind before this contract expires in May 2026? For now, studios want to ensure that humans are tied to scripts in a substantial enough way that they remain copyrightable.
We aren’t quoting the full section here to save some space, but here’s what Article 72 actually does for both sides:
AI can be used to crank out first drafts of scripts as long as a human rewrites the material into a second draft. A human writer must be credited as the first writer, which is also currently necessary for studios to copyright the material.
This agreement also guarantees that screenwriters will not lose compensation on a project due to the initial use of AI. The writer must be fully informed of the situation, receive their usual pay rate, and retain that first credit. Those aspects are a win for writers who feared the opposite – that their own work would be retooled by AI, that they would be paid a lesser rate and lose their credit, or that human writers would not be needed at all.
Both sides are likely satisfied with the current outcome, but this is not an all-clear for either side. Specific language in Section F states, “The parties acknowledge that the legal landscape around the use of GAI is uncertain and rapidly developing….” In Section G, “Each Company agrees to meet with the Guild during the term of this Agreement at least semi-annually at the request of the Guild… to discuss and review information related to the Company’s use and intended use of GAI...”
In other words, the subject of AI will be an ongoing discussion in this industry and countless others, and not everything has been settled here.
Stay tuned.