Demand for Subtitles and Captions Is Up. Here’s Why

These days, we can watch content nearly anywhere. We might catch up with a favourite TV series while on a long airline flight or watch a work seminar from a hotel room while on a business trip. As where and when we consume our content evolves, so does how we watch. One of the trends we have been noticing is the increased popularity of subtitles and captions. 

A recent survey found that over 50% of Americans use subtitles or captions “most of the time” when consuming content. Data from Netflix backs this up: they report that “more than 80% of members use subtitles or closed captions at least once a month.” 

This inclination is especially pronounced among Gen Z. In the UK, a 2021 Sapio Research study found that 80% of viewers between the ages of 18 and 25 use subtitles all or part of the time.

But there is no one reason why subtitles and captions have seen such an uptick in popularity. Instead, there are many contributing factors:

  • An increased focus on accessibility for d/Deaf and hard-of-hearing users means that more people and businesses recognize the need for captioning.
  • Apps like Instagram and TikTok let creators add AI-generated captions to their content, normalizing them for viewers.
  • Streaming makes it easy for people to watch content in public spaces, but they often consume it with the sound off.
  • Trends in sound mixing have made dialogue harder to hear in films and TV.
  • Easier access to foreign-language content for English speakers has made them feel more comfortable with subtitles.

It seems to be a self-perpetuating cycle: as viewers see more content with captions, they become more used to them, so they are more likely to opt for captions the next time.

What’s the Difference Between Captions and Subtitles?

Captions transcribe the exact words and sounds in an audio track for viewers who cannot hear the audio (or are watching at a low volume). Subtitles, by contrast, are for foreign-language content: they translate the individual lines of dialogue or sound cues into the viewer’s native language.

There are also “forced” subtitles, which provide the necessary context for a viewer if dialogue or narration is in a language other than the rest of the program. For example, if a French-language video has a segment with a conversation in Russian, the Russian speech will be subtitled in French by default, whether or not the viewer has toggled on subtitles.

Common File Formats

Both captions and subtitles typically use the same file formats. These file types store two key pieces of information: the text that needs to appear on screen and the exact period of time during which it appears. 

The most common is the SubRip format, which uses the extension .srt. SRT files are typically generated by software programs such as Subtitle Workshop, but they can also be opened and reviewed in basic text editors such as Notepad. They work with video editing software such as Final Cut Pro and Premiere Pro, as well as video hosting platforms such as YouTube and Vimeo.
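To make the format concrete, here is a minimal Python sketch that builds a single SRT cue: a sequence number, a start and end timestamp (SRT uses a comma before the milliseconds), and the caption text. The timings and caption text are illustrative, not taken from any real file.

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    total_ms = int(round(seconds * 1000))
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_cue(index: int, start: float, end: float, text: str) -> str:
    """Build one SRT cue: number, time range, text, and a blank separator line."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(srt_cue(1, 1.0, 3.5, "[intricate, macabre music playing]"))
```

This prints a cue that begins `1`, followed by `00:00:01,000 --> 00:00:03,500` and the caption text — the same structure you would see if you opened an .srt file in Notepad.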

Another popular format is Sonic Scenarist Closed Captions (.scc files), which require specialized software for editing. They are more feature-rich, allowing for italics and changes in text colour and positioning. SCC files can be used with many of the same platforms as SRT files.

What You Need to Know About Captions

Captions are used by d/Deaf and hard-of-hearing individuals when they consume video content. It is estimated that over three million Canadians are d/Deaf or hard of hearing, as are more than 48 million Americans.

But other audiences also benefit: non-native speakers can use them as a failsafe in case they do not understand the audio. Viewers also turn on captions when they are in a crowded space and cannot easily hear the audio.

Multiple types of sounds receive captions. Any narration or lines of dialogue should be captioned. When sound effects or pieces of music play an important part in telling a story or give context to visuals, they should also be captioned. 

Season 4 of the Netflix show “Stranger Things” received lots of attention for its descriptive captions, such as [intricate, macabre music playing] and [water gurgling]. Including these captions allowed viewers with hearing impairments to feel just as immersed in the spooky world of the show as those who could hear the audio.

Captions and Accessibility

In the United States, the Americans with Disabilities Act (ADA) is unequivocal: captions are a reasonable accommodation for d/Deaf and hard-of-hearing viewers. Private companies must provide captioned videos, including eLearning and training videos.

Canadian accessibility standards vary by province. The Accessibility for Ontarians with Disabilities Act (AODA) requires that video content on publicly accessible websites have captions, except for events that are streamed live.

Other provinces, such as Nova Scotia, are still determining how recently passed legislation about accessibility will apply to different types of content.

Captioning is also a way to make your video content more accessible to non-native speakers who may struggle with unfamiliar accents or rapid speech. When you offer captions, you can reach a broader range of consumers.

Learn about our accessibility compliance testing

How Captions Are Created

Captioners use sophisticated software that allows them to listen to an audio track and transcribe the words (or describe the sound effects). The software helps them align each individual caption with the corresponding sound so that it appears on screen only while that sound is playing.

Captions have some constraints: they need to take into account both reading speed and the space available on the screen. The person creating the captions needs to find creative ways to convey the maximum amount of meaning in the space available.
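The reading-speed constraint can be expressed as a simple check. The sketch below uses characters per second (cps) as the measure; the 17 cps ceiling is a commonly cited industry guideline (used, for example, in Netflix's style guides for adult programming), not a universal standard, and the sample caption is invented for illustration.

```python
def fits_reading_speed(text: str, duration_s: float, max_cps: float = 17.0) -> bool:
    """Return True if a caption can be read comfortably in its on-screen time.

    max_cps is the maximum reading speed in characters per second;
    17 cps is a common guideline, not a fixed rule.
    """
    return len(text) / duration_s <= max_cps

# A 19-character line shown for 1.5 seconds is about 12.7 cps — comfortably readable.
print(fits_reading_speed("Where are we going?", 1.5))  # → True
```

When a caption fails this check, the captioner must either extend its on-screen time or condense the wording — which is where the creative compression described above comes in.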

What You Need to Know About Subtitles

Subtitles are essentially translations created by specialized translators who are fluent in the source language (the audio) and native in the target language (the subtitles).

If you already translate elements of your online presence or localize L&D material into other languages, adding subtitles to your video content will further benefit your audience. For example, you could subtitle promotional videos in Spanish, allowing them to be shared on in-country social media channels.

How Subtitles are Created

A transcriber watches the video and creates a detailed transcript with time stamps that indicate when each line of dialogue or sound cue needs to appear. Then a translator uses that transcript as the basis of their target-language subtitles, referring to the original script and the video content for context. Finally, the subtitles pass through a QA process where they are checked for accuracy and appropriate timing. 

Transcribing before translating the source file makes it easier to create accurate subtitles for multiple languages.

Much like captioners, subtitle translators must pay close attention to the available space and how lines break on the screen. Whenever possible, they try to provide context for concepts unfamiliar to the target-language viewer.

Why You Need Professionally Created Captions and Subtitles

Many apps and websites, such as Zoom and YouTube, now offer AI-generated captions. While this is faster and cheaper than hiring a captioning professional, AI tools often fail to capture proper nouns, foreign words, or mumbled speech correctly. These tools may also not be trained on accented speech, so they may not be as effective for speakers with foreign or regional accents.

AI-generated captions also include filler words that professionals edit out. Because they capture every word spoken, they often move so quickly that they are difficult to decipher. 

Likewise, there are now tools that allow you to machine translate SRT files with one click. However, these translations do not consider context, which can lead to mistranslations, particularly of slang and names.

What About Transcription?

Transcription is the process of listening to an audio (or video) file and creating a written document with the same content. For example, many podcasts provide transcripts for listeners to refer to and quote from.

Transcripts are beneficial for several reasons. The first is that they are another way to enhance accessibility for individuals who are d/Deaf and hard of hearing. 

Additionally, when you post transcripts on your website, they are rich with keywords and useful content, which may make them beneficial for SEO. 

Finally, not every potential client prefers consuming video. Time-stamped transcripts make it easy for them to find what they need in a short period of time without feeling like they have to watch a video.

Learn about transcription services at Art One Translations and how you might use them in your business.

Have Content That Needs Captions or Subtitles?

Art One Translations provides subtitles and captions for podcasts, webinars, and interviews in IT, marketing, education, health, and many other industries.

Recent subtitling and captioning projects have included a feature-length documentary and videos for a corporate fundraising gala.

To learn more about our subtitling, captioning, and transcription capabilities, and to see additional samples of our work, contact us today.
