Closed captioning for video hasn't changed much since it was developed in the 1970s: the words spoken by the characters or narrators scroll along at the bottom of the screen.
A team of researchers from China and Singapore has developed a new closed captioning approach in which the text appears in translucent talk bubbles next to the speaker. The new approach aims to improve the viewing experience for the more than 66 million people around the world who have hearing impairments.
The system places the dialogue text near the speaker's face and highlights the words in sync with the speech. The text also appears in different locations and styles to better reflect the speaker's identity and vocal dynamics.
Using a technique called visual saliency analysis, the system automatically finds an optimal position for the talk bubble so that it interferes minimally with the visual scene. Professionals can further adjust the generated captions, for example by moving the talk bubbles. When the speaker is off-screen, or a narrator is speaking, the words appear at the bottom of the screen as in conventional closed captioning.
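The paper's exact placement method isn't reproduced here, but the core idea can be sketched in a few lines of Python. The sketch below assumes a precomputed saliency map (a 2D array where higher values mark visually important regions) and a detected face box; it then tests candidate bubble positions around the face and picks the one covering the least salient area. The function name, the candidate scheme, and all parameters are illustrative assumptions, not the researchers' actual algorithm.

```python
import numpy as np

def place_bubble(saliency, bubble_h, bubble_w, face_box, margin=5):
    """Pick the candidate position around the face whose window
    covers the least salient (least visually important) region.

    saliency  : 2D numpy array, higher = more visually important
    face_box  : (top, left, height, width) of the speaker's face
    returns   : (top, left) of the bubble, or None if nothing fits
    """
    H, W = saliency.shape
    fy, fx, fh, fw = face_box
    # Candidate anchors: left of, right of, above, and below the face.
    candidates = [
        (fy, fx - bubble_w - margin),   # left of the face
        (fy, fx + fw + margin),         # right of the face
        (fy - bubble_h - margin, fx),   # above the face
        (fy + fh + margin, fx),         # below the face
    ]
    best, best_score = None, np.inf
    for y, x in candidates:
        # Discard candidates that fall outside the frame.
        if y < 0 or x < 0 or y + bubble_h > H or x + bubble_w > W:
            continue
        score = saliency[y:y + bubble_h, x:x + bubble_w].mean()
        if score < best_score:
            best, best_score = (y, x), score
    return best
```

In practice the saliency map would come from a saliency model run on each video frame, and a real system would also score overlap with other faces, on-screen text, and the bubbles of other speakers.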