As legal requirements and ever more diverse audiences demand multi-language captions, localization, and ASL support in their live and on-demand streaming content, how can content developers and producers remain on top of both regulatory requirements and the technical demands of provisioning their content for maximum accessibility? Allan McLennan, President/CGA, 2G Digital Optimizations, discusses this topic with Corey Behnke, Producer & Co-Founder, LiveX, in this clip from their panel at Streaming Media East 2023.
McLennan begins by providing an overview of the new language-provision laws that are emerging for content and how major streamers are now working to ensure that content is available in as many languages as possible for international markets. “The new UK media law means that every form of content produced and distributed in the UK has to be provisioned into the language of the viewer, which means every piece of content has to be localized,” he says. “This is also starting to translate over into the Netherlands [and] it’s starting to be looked at in Germany.” He says that when Disney+ was launched, “70% of their overall content costs were based in dubbing because they wanted to reach the audience in and of itself to be able to capture new revenue models that are there.”
McLennan wonders how these language provisions work with live-streaming content. He asks Behnke, “How many languages do you look at when you’re provisioning your content out?”
“A hundred percent,” Behnke says. “So politics in the US government are probably the biggest area where you have the most captioning, ASL, and multi-language. We just did the grassroots campaign for the President two weeks ago, and we had an ASL feed, an English feed, a Spanish feed, and then captions for all of those.” He notes the challenges of different platforms handling these multiple feeds and the overall scalability issues. “Even YouTube can’t really handle one player having all those things,” he says. “So you end up not just having multiple distribution channels, but within a platform. So even if you look at how we did DNC 2020, you’re going out with one night’s feed, and that feed might have three ASLs, three languages, and each of those are different streams. So now you have the scale of having to do the VOD trims for all of those and making sure that captions are clean and those kinds of things. It definitely keeps us working.”
McLennan asks, “Do you refresh it? Do you update it? Even after it’s gone into on-demand, you’ve got it all set?”
“Typically, we’re trimming because we did a good job,” Behnke says. “But when…something happens in the program, you’ve got to go back. You’ve got to make sure it’s clean.”
“Do you ever add any types of new forms of interactive capabilities to the content to refresh it?” McLennan asks. “Is it just done, or [are] there times that you’ll take it and update it or allow it to be in a position [where] new viewers could come in and take a look? You’re talking about the DNC, and as we move forward in this next 12 months or so, it will be a high-demand activity.”
“I’d love to see where you can keep certain ancillary data like that,” Behnke says. “Ancillary data inside of a live stream is only going to increase exponentially. Over time, my hope is that players can take more of that ancillary data and then have a better way for a viewer to be able to choose what the experience is so that all of your content lives in one player as opposed to now, where it’s distributed a lot of places.” He highlights the ways that people are currently working around these challenges, and he uses Twitter as an example. “A lot of people solve their problem [by] literally [having] a Spanish Twitter account for their political organization. Or ASL, right? There are three different American Sign Languages. How many organizations are actually making content for all three?”
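The “single player” experience Behnke hopes for is already expressible at the packaging level in HLS, where alternate audio and subtitle renditions are declared alongside the video variants so the player can surface them as viewer choices. The sketch below is a minimal, hypothetical multivariant playlist (all URIs and bitrates are invented for illustration); note that an ASL interpreter feed has no dedicated signaling in HLS, so it is shown here as a separate video variant, one common workaround:

```m3u8
#EXTM3U
# Alternate audio renditions the player can offer in one UI
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="English",LANGUAGE="en",DEFAULT=YES,AUTOSELECT=YES,URI="audio_en.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="Español",LANGUAGE="es",AUTOSELECT=YES,URI="audio_es.m3u8"
# Audio description track, flagged via the accessibility CHARACTERISTICS attribute
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="English (Audio Description)",LANGUAGE="en",CHARACTERISTICS="public.accessibility.describes-video",URI="audio_en_ad.m3u8"
# Subtitle/caption renditions
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",LANGUAGE="en",URI="subs_en.m3u8"
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="Español",LANGUAGE="es",URI="subs_es.m3u8"
# Main program video
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080,AUDIO="aud",SUBTITLES="subs"
main_1080p.m3u8
# ASL interpreter as a separate video variant (a workaround, not dedicated ASL signaling)
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080,AUDIO="aud",SUBTITLES="subs"
asl_1080p.m3u8
```

In practice, this is why producers like Behnke end up with parallel distribution channels instead: player support for switching among this many renditions, especially multiple video angles like ASL, remains uneven across platforms.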
Behnke also describes another accessibility feature: audio description. “It’s for people who are blind,” he says. “It’s literally commentators that are describing what’s happening in the feed. And it’s brilliant. Actually, if somebody has an audio description version of the content, I would rather watch that a lot of times because it’s ongoing commentary.” However, he notes that only certain major tech companies and sectors are fully engaged with accessibility. “In our country, it’s only the Googles of the world, it’s only the government, it’s only political and education that really focus on making streams more accessible.”
McLennan says, “You’re saying [that] right now, within the live broadcast, that component is done live, or is it a tool?”
Behnke says that it is done live. “What was kind of cool is for the DNC, we use this product called Cleanfeed, which is just a very low latency audio feed,” he says. “And we basically use that to bring commentators back, so they’d be in their house. So instead of an American Sign Language interpreter, you literally have two people who are, instead of sports commentators, they’re just [doing] audio description. And then we’d have an audio description channel on YouTube for people that are blind…they can come to that channel and know what’s happening.”
Learn more about a wide range of streaming industry topics at Streaming Media Connect 2023.