One evening this summer, I got to watch firsthand as Joan Tait’s life unraveled. I looked on as she lost her fiancé and then her job, committed an outrageous act in a church, and was arrested and later convicted, all because of, as Joan’s attorney described it, “some super-advanced, deep-fake quantum computer mumbo-jumbo.”
All this I witnessed from my couch, with a beverage and some snacks, viewing an episode of the Netflix science fiction show Black Mirror called “Joan Is Awful.” Joan isn’t a real person but rather the main character — and one of the chief victims — in a disturbing portrayal of a too-close-for-comfort near future where the power of artificial intelligence, wielded by the wicked, sociopathic head of a seemingly omniscient media corporation called Streamberry, wrecks lives effortlessly and without conscience.
I’d wanted to watch “Joan Is Awful” in large part because I’d heard it confronts issues that have lately taken on great urgency, not only in my business, which resides at the intersection of digital technology and media + entertainment, but in all the industries and organizations that AI touches, which nowadays is most of them.
Black Mirror, I knew, could be provocative. After seeing this episode, I was indeed provoked, or at least motivated, to engage in the dialogue about the ethical considerations surrounding AI in the media + entertainment business I’m part of. I’d watched “Joan Is Awful” against the backdrop of two labor disputes, one involving the Writers Guild of America (WGA) and the other SAG-AFTRA (the Screen Actors Guild – American Federation of Television and Radio Artists). As I write this, both unions are striking as they demand new guidelines and assurances from their employers around, among other issues, the use of AI in a range of applications that impact how content is produced, how likenesses are used in that content, and how compensation should work when companies use AI-generated likenesses instead of the real thing.
Generative AI can profoundly change the media + entertainment industry, and although “Joan Is Awful” is a work of fiction, the possibilities it depicts underscore the urgent need for stakeholders across our industry to collaboratively develop a set of ethical standards governing how the technology is applied. If AI itself has no ethical awareness, then it’s up to the entities and people behind that technology to ensure it is ethically applied in all instances. Without widely accepted and enforceable checks and balances that define the rules of engagement, including what is and isn’t acceptable in terms of outcomes with AI, there’s a real risk we’ll lose control of the technology.
The high-profile strikes by the writers and screen actors unions have brought more mainstream scrutiny to the industry’s AI ethics debate, which is being driven by the likes of the Joint Task Force for Artificial Intelligence in Media, a group formed by the Entertainment Technology Center (ETC) at USC and the Society of Motion Picture and Television Engineers (SMPTE) that has identified ethics as a high-priority issue. While I don’t pretend to be an expert on ethics, I can at least draw from my perspective at the confluence of technology and the business of media + entertainment to frame the issues and questions involved in creating a common, industry-wide set of AI standards.
First, let’s discuss what’s at stake. AI has the potential to profoundly change the entire media + entertainment value chain in ways we have yet to fully realize. It comes with costs and risks, including ethical risks, that we don’t yet fully fathom but nevertheless must address immediately. In the hands of a bad actor, generative AI could presumably be used in dangerous and damaging ways to intrude on privacy, replace humans, and trample on their rights and ability to earn a living. But it also could be used as a positive and productive force across the value chain, furthering and supporting the human creative process and helping people work more efficiently. Ethical standards would help to reinforce the positive while preventing the negative. The outsized role media + entertainment plays as a cultural and societal trendsetter, mirror, and bellwether also means the industry has an obligation to the public and itself to define and uphold certain ethical standards.
What might those ethical standards look like? While I’ll leave the specifics to people far more qualified than I am, one thing is clear: the standards should be industry-specific, explicitly addressing AI’s application in production, post-production, marketing, distribution, and beyond, all the way to direct-to-consumer streaming services. They should include data governance rules. They should establish clear rights in order to avoid situations that even remotely resemble the privacy and likeness violations that victimize characters in “Joan Is Awful.”
AI ethical standards also should require transparency in the models and data used in specific AI applications. At one point in the show, Mona Javadi, the head of the media company that brings Joan’s life to the masses, admits the company “barely knows how [its quantum computer-driven AI] works.” To build trust, everything about the AI algorithm and how it works needs to be clear and explainable.
Whatever shape these ethical standards ultimately take, they should express an industry-wide set of values around privacy, rights, and more and detail a system for regular auditing of data, models, and systems. They should be revisited often and updated to keep pace with AI technology development.
As rapidly as that technology is advancing, now is the time for media + entertainment stakeholders across the development, technical, marketing, and talent sides of the industry to come to the table to collaboratively develop AI ethical standards. This is an effort that, to succeed, must involve the entire value chain. Getting all of those stakeholders to agree on an effective, enforceable set of standards won’t be easy. As for enforcement, there will likely need to be some type of independent agency to govern the standards themselves and to oversee reporting, audits, and compliance.
In the Black Mirror episode, the “real” Joan isn’t nearly the awful person the AI version of her is portrayed as being. Likewise, AI technology needn’t turn out to be the sinister tool that it becomes in the show. Now it’s up to stakeholders across the media + entertainment industry to take action to ensure AI is applied without doing harm, and that it becomes a sustainably positive force across the value chain rather than a tool people seek to destroy, as Joan did, for the wickedness it enables.
Leon Huang is senior director of Sports, Media, and Entertainment at SAP. Before joining SAP, he held various sales and marketing positions at Electronic Arts and News Corp (STAR TV).
[Editor’s note: This is a contributed article from SAP. Streaming Media accepts vendor bylines based solely on their value to our readers.]