Following her fascinating intervention at Migrating Texts 2014, in this blog post academic and professional subtitler Lindsay Bywood provides more information about developments in audiovisual translation.
Subtitling has, in recent years, become a somewhat more visible presence on our screens than it once was, thanks in part to the popularity and success of series such as The Killing, Borgen, and Inspector Montalbano, to name but a few. It seems that the TV audience in the UK is slowly becoming more willing to consume foreign-language film and television with subtitles. This is extremely good news for practitioners and scholars of audiovisual translation (AVT), who have been working for some decades now to make video material accessible to speakers of other languages and those with disabilities.
The practice of subtitling consists in converting speech to text (with or without translation), but this conversion is by no means simple. Firstly, we can speak faster than we can read, so the subtitle must usually be shorter than the utterance. It is the subtitler’s task to edit the spoken text to a length at which the viewer can read it and still have time to appreciate the visual content offered by the programme producer. Secondly, there are technical constraints: foreign-language subtitles run to a maximum of two lines, so as not to mask too much of the image, and there is a limit to the length of the text line which can be displayed on the screen. Additionally, the subtitler needs to make the text as readable as possible in order to allow the viewer easy access to the meaning of the dialogue. To this end, the subtitler has to follow both semantic and syntactic rules alongside the technical and timing constraints.
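The constraints described above can be expressed as a simple check. The sketch below is purely illustrative: the specific limits (two lines, roughly 37 characters per line, a reading speed of around 17 characters per second) are common conventions rather than universal standards, and real subtitling software applies far more nuanced rules.

```python
# Illustrative check of typical subtitling constraints.
# The numeric limits below are common conventions, not fixed standards;
# different broadcasters and companies use different values.

MAX_LINES = 2
MAX_CHARS_PER_LINE = 37
MAX_CHARS_PER_SECOND = 17.0

def check_subtitle(lines, duration_seconds):
    """Return a list of constraint violations for one subtitle."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)}")
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line {i} too long: {len(line)} chars")
    total_chars = sum(len(line) for line in lines)
    cps = total_chars / duration_seconds
    if cps > MAX_CHARS_PER_SECOND:
        problems.append(f"reading speed too high: {cps:.1f} cps")
    return problems

# A two-line subtitle shown for three seconds passes all three checks.
print(check_subtitle(["I'll see you tomorrow,", "if the weather holds."], 3.0))
```

This captures the core trade-off: if the dialogue is too dense for the time available, the subtitler must edit the text down rather than ask the viewer to read faster.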
The subtitler does have some help, however, as their task differs from that of the text translator in that for the text translator, the target text replaces the source text, whilst for the subtitler, the source text remains unaltered alongside the target text. A film or television programme is a multi-semiotic text, and the viewer receives meaning through the dialogue, the images on screen, the music and sound effects, as well as the subtitles (and any other text on screen).
The biggest difference between translation subtitling and other forms of translation is that it is also the subtitler’s job to decide when the subtitles should be seen on screen, that is, the in-times (when the subtitle appears) and the out-times (when it disappears). A subtitle file consists of text and timecode, and the timecode is what facilitates this process; the timecode in the file corresponds to a timecode carried in the audiovisual material. Here too there are rules which have been developed over time: some through viewer research, and some just through convention.
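The text-plus-timecode structure described above can be seen in the widely used SubRip (.srt) file format, sketched below. This is only an illustration of the principle: professional workflows use dedicated subtitling software and broadcast formats with frame-accurate timecode, and the cue text here is invented.

```python
# A minimal illustration of a subtitle cue: text plus in- and out-timecodes.
# SubRip (.srt) is used here because its format is simple and human-readable.

import re

SRT_CUE = """1
00:00:01,500 --> 00:00:04,000
I'll see you tomorrow,
if the weather holds."""

# SRT timecodes are hours:minutes:seconds,milliseconds.
TIMECODE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def to_seconds(tc):
    """Convert an SRT timecode string to seconds."""
    h, m, s, ms = map(int, TIMECODE.match(tc).groups())
    return h * 3600 + m * 60 + s + ms / 1000

lines = SRT_CUE.splitlines()
in_tc, out_tc = (t.strip() for t in lines[1].split("-->"))
duration = to_seconds(out_tc) - to_seconds(in_tc)
print(f"in: {in_tc}  out: {out_tc}  on screen for {duration:.1f}s")
# → in: 00:00:01,500  out: 00:00:04,000  on screen for 2.5s
```

The in-time and out-time in the file are matched against the timecode carried in the audiovisual material, which is how playback software knows exactly when each subtitle should appear and disappear.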
The other form of subtitling is subtitling for the D/deaf and the hard-of-hearing, sometimes called SDH. This is usually done in the same language as the programme, and is likewise carried out by a subtitler, who may well be the same person who does translation subtitling. There are some differences in the rules applied to this form of subtitling, though: since research has shown that D/deaf and hard-of-hearing people prefer to have more text rather than less, these subtitles can be longer than two lines, and are often verbatim, that is, completely unedited. Also, because these particular viewers have little or no access to the aural elements of the audiovisual content, there is a need to indicate other non-verbal sounds, such as sound effects, laughter, etc. Additionally, this viewer group requires indication of who is speaking, which is done using various methods, including speaker labels, colours, and positioning the subtitle under the mouth of the person speaking.
There is a legal requirement in many European countries for a certain amount of television and film to be made accessible to those with disabilities, and the UK, through Ofcom, leads the way in this respect. As many television programmes are live (not only news, but also sporting events and live entertainment programmes such as The X Factor and I’m a Celebrity… Get Me Out of Here), this poses a problem for the broadcaster: how to subtitle these programmes when even the fastest typist cannot keep up with the speed of an average speaker. As is often the case, technology provides an answer of sorts. The most recent solution for what is termed ‘live subtitling’ has resulted in the creation of a new professional profile within AVT: the respeaker. The respeaker is trained to listen to the audio of a TV programme and ‘respeak’ this audio in subtitle form, with punctuation, into a speech recognition system, such as might be used by people who cannot touch type. This system then interfaces with subtitling software to produce the subtitles on screen, providing a faster service than was previously possible. The speech recognition system is trained to recognise the respeaker’s voice and accent, which results in a high level of accuracy. However, the system struggles to deal with homophones, leading to some of the sometimes hilarious mistakes that we see lampooned on social media.
To work as an offline subtitler (that is, not live subtitling as described above) it is usually necessary to have software. Previously such software was either very expensive or the proprietary software of commercial subtitling companies. The market has now expanded and cheaper software exists along with freeware and cloud-based platforms that have all of the essential functionality to work as a subtitler but might lack some of the more sophisticated tools of the more established packages. There are many courses in subtitling, from one-day introductions up to Master’s courses and some companies also offer training to interested parties.
With the explosion in audiovisual content, the interest in and demand for subtitling has increased exponentially, which is seen as a positive development throughout the industry.
About the author
Lindsay Bywood studied German and Philosophy at the University of Oxford and holds an MA in Translation from the University of Salford. She has been working in subtitling since 1998, starting as a subtitler and quickly progressing to senior management. Most recently she was Director of Business Development at VSI, an international subtitling and dubbing company with headquarters in London. Lindsay is currently studying for a PhD in subtitling at CenTraS, University College London. She teaches at MA level and runs workshops in project management, AVT, post-editing, and professional skills for translators. She is a member of ESIST, speaks regularly at translator training events, and has published several papers on subtitling. Her research interests include diachronic variation in the subtitling of foreign films into English, the didactics of translation, machine translation for subtitling, and the interface between academia and industry.