How captions on TikTok and other video platforms are reshaping internet culture, and how we talk about disability in technology

I was about 12 years old when, in July 1993, the Federal Communications Commission (FCC) mandated that television manufacturers build closed-caption support into every set larger than 13 inches. The directive was the result of the Television Decoder Circuitry Act, passed three years earlier. The change was an important one for accessible media, because captions had previously required dedicated equipment. Since my parents were completely deaf, our living room TV sat alongside a VCR-like device called a set-top box, which transmitted the captions to the TV so they could understand the dialogue. These set-top boxes were manufactured by Sanyo and marketed by the non-profit National Captioning Institute.

I have vivid memories of that box sitting on our TV stand for what feels like eons ago. Next to the marvels of modern technology, it seems as antiquated nowadays as rotary dial phones, telephone booths, pagers, cassette tapes and answering machines – all of which I remember well, too. Suffice it to say, these memories are good indicators of my age.

In a story written by Brian Contreras for the Los Angeles Times, published last September, Paula Winke, a professor of linguistics at Michigan State University who has studied the educational benefits of captions, eloquently described the annotations as “glasses for your ears.” Contreras’ story focuses on TikTok for good reason – it’s one of the largest social networks, and the company prides itself on its commitment to accessibility – but captions are truly ubiquitous in the digital world. In fact, the company often engages disabled creators; the focus on captions is no coincidence. From TikTok to Instagram to Snapchat to FaceTime to Zoom and more, tech companies big and small are investing in making their products more inclusive through accessibility. Since people with disabilities make up the largest minority group in the world, doing so makes sense besides being the right thing to do. After all, disability culture is very much an integral part of internet culture. People with disabilities use the internet, and the tools we all use should reflect that by being as accessible as possible.

The attention that TikTok (and others) is paying to the disability community is heartening. Accessibility is something software development teams everywhere ought to absorb and normalize, not rush to bolt on after the fact just to satisfy some government regulation. The shift to remote work brought about by the pandemic has enabled significant progress, but the fight for digital equality continues.

Beyond the obvious accessibility ramifications for deaf and hard-of-hearing people, captions are also useful for their bimodal sensory input. It can be helpful for many – myself included – both to hear the spoken audio and to read the text that appears on screen. A double dose of sensory information reinforces what’s happening on multiple levels: not only do you hear what’s being said, you see it too. You don’t need to be hard of hearing to appreciate that.

“There is a growing expectation on the part of users that captions or transcripts will be available to them in all types of online content,” Heather Bonikowski, a lexicographer at Dictionary.com, told me in a recent email interview. This growing familiarity with captions, as she described it, reflects a heightened awareness of accessibility and assistive technology; people are increasingly discerning about “how [captions] may be better suited to specific environments or content,” Bonikowski said.

Bonikowski explained that the team at Dictionary.com keeps track of how people consume content nowadays, most notably on TikTok and YouTube. Many people watch videos on these platforms with the audio turned off, which requires some sort of accommodation to understand what’s playing. Thus, captions are an essential accessibility feature when watching videos online.

But it’s not just the captions themselves that are valuable. Since Dictionary.com is a dictionary, it is deeply concerned with the language of accessibility and disability. That language is changing rapidly, Bonikowski tells me. She and her team have spent a lot of time updating many words (e.g., “alt text” and “screen reader”) in the dictionary so that their definitions reflect the community’s growing awareness of accessibility and assistive technologies. She cited closed captions as an example. “A lot of people have the CC logo from TV imprinted as their idea of captions, but ‘closed captions’ are a specific type of caption that has to be enabled in the system menu of a DVR or TV receiver,” Bonikowski said. She added that open captions, by contrast, are embedded in the source video itself and cannot be toggled off. That type of caption is “more common in online video content,” she said.
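The closed/open distinction Bonikowski describes maps neatly onto how online video works under the hood: closed captions live in a separate text track (WebVTT is the common web format) that a player can toggle, while open captions are pixels baked into the frames. As a minimal illustration – the function names and cue text here are my own, not from the article or Dictionary.com – here is a sketch of building such a toggleable track:

```python
# Sketch: closed captions are a separate, machine-readable text track.
# A player can show or hide this track on demand; open captions, by
# contrast, are rendered into the video frames and cannot be turned off.

def format_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def build_webvtt(cues: list[tuple[float, float, str]]) -> str:
    """Build a WebVTT caption track from (start, end, text) cues."""
    lines = ["WEBVTT", ""]  # required file header, then a blank line
    for start, end, text in cues:
        lines.append(f"{format_timestamp(start)} --> {format_timestamp(end)}")
        lines.append(text)
        lines.append("")  # blank line separates cues
    return "\n".join(lines)

# Captions (unlike subtitles) also describe non-speech sound.
track = build_webvtt([
    (0.0, 2.5, "[door creaks open]"),
    (2.5, 5.0, "Hello? Is anyone there?"),
])
print(track)
```

On the web, a file like this would typically be attached to a video via an HTML `<track kind="captions">` element, which is what lets the viewer flip captions on and off; burning open captions in would instead require re-encoding the video itself.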

Bonikowski echoed a sentiment I’ve expressed many times in this column: that accessibility features benefit everyone – not just people with disabilities. She believes an increasing number of people are realizing this, in large part due to what she calls the “democratization of [online] content creation.” Anyone can be an internet creator, and such a diverse group of creators brings with it a sensitivity to diverse needs and technologies.

“When major publishers and studios produced the majority of books, classroom materials, television shows, films and other content, captioning was a specialized publishing feature, produced by professional captioners,” Bonikowski said. “Today, however, educators, TikTokers, streamers, YouTubers and others are creating original content, and making that content accessible means that many creators are teaching themselves how.”

At a high level, Bonikowski and her team’s interest in language and its meaning, particularly in relation to the disability community, is not only a sign of increased awareness and empathy. It is an indication that language is evolving. An example is the question of whether to say “person with a disability” or “disabled person.” Choosing one over the other says a lot about modern societal sensibilities.

“We use our usage notes to document the discussion around person-first [versus] identity-first language preferences within the disability community,” Bonikowski said. “This kind of variation is normal within a language community. [Our] recommendation is always to follow the preferences and self-identifying language of the speaker in question.”

Another example Bonikowski mentioned is the difference between captions and subtitles. Whereas the two previously had distinct definitions – captions for dialogue and sound, subtitles for translation – the terms have become more interchangeable over time. She noted that this discussion played out online after people wondered about the best way to watch Netflix’s Squid Game series: with subtitles translated from Korean to English, or with captions of the dubbed English audio. Bonikowski said this kind of language conversation isn’t necessarily exclusive to people with disabilities, but rather reflects “the growing awareness among all users and viewers who consume this content that was previously thought about more rigorously in terms of accessibility.”

Language is ultimately about communication. Bonikowski tells me that Dictionary.com’s monitoring of language trends is critical work – for them as an organization and, more importantly, for a community’s ability to express itself effectively and accurately.

“[We] strive to be the source through which people can inform themselves about these sophisticated linguistic nuances, so that users can make choices that convey their messages accurately and completely.”