Subtitling multilingual films: Options vs Constraints

At the third instalment of Migrating Texts on 11 November 2016, Dr Irene de Higes Andino (Universitat Jaume I) both presented her research and ran a hands-on activity on subtitling multilingual films. In this blog, she explores some of the key points she shared with us on the day.

Although the coexistence of languages in audiovisual texts is not a new phenomenon, as Jean-François Cornu made clear during his presentation at the Migrating Texts 3 event, the reality is that multilingualism is increasingly present in films and TV series. Its translation is a challenge for subtitlers because the presence of a third language or languages (L3, following the terminology of Corrius 2008, Zabalbeascoa & Corrius 2012 and de Higes Andino 2014) complicates the traditional translation process: a multilingual discourse is not simply a monolingual discourse (in L1) to be rendered as another monolingual discourse (in L2). What happens to L3 during the translation process?

As I did during my presentation and the hands-on activity at the Migrating Texts event last November, here I will reflect on the process of subtitling multilingual films.

In my conception, a multilingual film is any audiovisual text (TV series, feature film, short film, documentary, etc.) that includes more than one language, whether living, dead or invented (see a summary of variables for L3 in Zabalbeascoa 2012). Any film from the Star Wars saga is a multilingual film, as are the documentary Do You Really Love Me? (Alistair Cole 2011) and the animated TV series Dora the Explorer (Nickelodeon Studios 2000- ).

One characteristic of multilingualism in films is that language diversity makes translation visible. Sometimes this happens intradiegetically (Cronin 2009), when translation becomes part of the story and serves as a communication tool among characters, i.e. when a character acts as an interpreter – for example, in It’s a Free World… (Ken Loach 2007). On other occasions filmmakers find it necessary to translate L3 extradiegetically, that is, translation is added when editing the film so that the audience can understand L3 (Cronin 2009). The most common solution is to add subtitles during post-production – as in Provoked: A True Story (Jag Mundhra 2006). Alternatively, an off-camera voice may translate L3 dialogues through voice-over – e.g. in Spanglish (James L Brooks 2005). Finally, translation is conspicuous by its absence when no translation is provided for L3 dialogues; the audience is then faced with a sense of incomprehension which might be similar to the one felt by the characters. Later I will explain how the (in)visibility of translation in the text to be translated usually affects its translation.

Let’s now reflect on the subtitling process, an industrial process in which different human agents are involved. Not only does the subtitler participate in it with his/her translation; the final subtitles are also the product of a process initiated and supervised by the distributor. The subtitler’s task is thus open to options but also limited by constraints.

When translating a multilingual film, subtitlers have different options: they need to decide which subtitling convention to apply, based mainly on the translation strategy chosen. To do so, they usually first reflect on the function multilingualism has in the text. The presence of an L3 usually conveys realism, in the sense of representing the language diversity of a society, but it may have a different function too. Sometimes it might simply mark characters as an Other, emphasise an identity or even voice social criticism. It may also be a vehicle for humour or be used to create confusion (cf. de Higes Andino 2014).

Once the importance of multilingualism is determined, it is time for the subtitler to decide on the translation strategy (Bartoll 2006): will multilingualism be marked or not? The following table, based on de Higes Andino 2014, shows how the translation strategy is reflected in the film through the language and/or the typographical convention used in the subtitles. Two further conventions are included: the combination of modes (for example, in one and the same scene some L3 dialogues may be subtitled while others are not) and the absence of subtitles (either because L3 is not to be translated or because L3 coincides with L1 dialogues, which are the ones subtitled according to relevance principles).

Translation mode: Subtitling
  • Conventions marking multilingualism
    – Language of the subtitles: L1, L3 (the same or a different third language), interlanguage
    – Typography: box, brackets, capital letters, change of positioning, colours, italics, label, quotation marks, square brackets
  • Conventions not marking multilingualism
    – Language of the subtitles: L2
    – Typography: no special typography
Translation mode: Combination of modes
  • Marks multilingualism (for example, some L3 dialogues in a scene are subtitled while others are not)
Translation mode: Non-translation (absence of subtitles)
  • Marks multilingualism when no subtitles are on screen
  • Does not mark multilingualism when dialogues in L3 overlap with the L1 dialogues, which are the ones subtitled
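To make the distinction concrete, here is a minimal sketch of how one of the typographical conventions above – italics – might be applied only when the chosen strategy marks L3. It is an illustration only, not drawn from the corpus: the SubRip-style <i> tags and the dialogue line are assumptions made for the example.

```python
def subtitle_line(text: str, is_l3: bool, mark_multilingualism: bool) -> str:
    """Render one subtitle line, italicising L3 dialogue only if the strategy marks it."""
    if is_l3 and mark_multilingualism:
        return f"<i>{text}</i>"  # SubRip-style italics as one possible marking convention
    return text

# The same translated L3 line under each strategy:
print(subtitle_line("I've been here six months.", is_l3=True, mark_multilingualism=True))
# -> <i>I've been here six months.</i>
print(subtitle_line("I've been here six months.", is_l3=True, mark_multilingualism=False))
# -> I've been here six months.
```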

As the text to be translated is audiovisual, the decision to apply one option or another may be based on constraints. A multilingual film might be technically manipulated (Díaz Cintas 2012) when the way L3 dialogues are subtitled is restricted by textual constraints (Martí Ferriol 2010):

  • Formal constraints (related to the conventions of subtitling)
  • Linguistic constraints (related to the languages used in the text)
  • Semiotic and iconic constraints (related to semiotics and visual elements)
  • Socio-cultural constraints (related to the cultures represented in the text)

According to the results of de Higes Andino (2014), no clear trend is generally observed in the translation of L3, whether constraints are present or absent; it seems to depend largely on extra-linguistic factors, with one exception. L3 is frequently not marked when subtitlers face formal constraints implying text reduction (voices in the distance, long shots, overlapping dialogue or L3 overlapping with the music code). In their absence, on the contrary, the percentage of samples marking multilingualism increases considerably. That particular technical constraint does seem to limit the subtitler’s task.

However, this is not the only constraint determining the subtitler’s options. Extra-linguistic decisions may produce an ideological manipulation of the text (Díaz Cintas 2012). First, the original intention of the filmmakers is largely taken into account. From the interviews carried out during my research, the following conclusion was drawn: the linguistic representation of migrant characters affects L3 subtitling, as L3 dialogues subtitled into L1 by the filmmaker are almost the only ones to be translated into the target language.

Second, the subtitler’s task is often restricted by the other human agents participating in the process. The distributor, as the initiator of the commission, may limit the subtitler’s options. Distributors interviewed for my research were against the use of typography to mark multilingualism for artistic reasons related to the readability of subtitles. On the one hand, they thought that the audience might not accept or understand the typographical conventions used to visually mark multilingualism. On the other hand, they were convinced it is unnecessary, as the audience may in any case be able to detect code-switching through the soundtrack.

The final conclusion of my descriptive and comparative research on the subtitling of multilingual British films into Spanish is that, in general, no clear trend is observed in the subtitling of L3. Given the increasing number of multilingual audiovisual texts, my aim is nevertheless to show professional subtitlers and future professionals what options they have and which textual constraints and extra-linguistic decisions may determine the final solution. They would then be able to identify the function of multilingualism, justify their decisions on translation strategy and try to convince filmmakers and distributors to adopt new language and typographical conventions for marking L3.

References

Bartoll, E. (2006). Subtitling multilingual films. In M. Carroll, H. Gerzymisch-Arbogast, & S. Nauert (Eds.), Proceedings of the Marie Curie Euroconferences MuTra: Audiovisual Translation Scenarios, Copenhagen 1-5 May 2006.

Cronin, M. (2009). Translation goes to the Movies. London and New York: Routledge.

Corrius, M. (2008). Translating Multilingual Audiovisual Texts. Priorities, Restrictions, Theoretical Implications (PhD dissertation). Universitat Autònoma de Barcelona, Barcelona.

de Higes Andino, I. (2014). Estudio descriptivo y comparativo de la traducción de filmes plurilingües: el caso del cine británico de migración y diáspora (PhD dissertation). Universitat Jaume I, Castelló de la Plana. http://tdx.cat/handle/10803/144753

Díaz Cintas, J. (2012). Clearing the Smoke to See the Screen: Ideological Manipulation in Audiovisual Translation. Meta, 57(2), 279-293.

Martí Ferriol, J. L. (2010). Cine independiente y traducción. Valencia: Tirant lo Blanch.

Zabalbeascoa, P. (2012). Translating Heterolingual Audiovisual Humor: Beyond the Blinkers of Traditional Thinking. In J. Muñoz-Basols, C. Fouto, L. Soler González, & T. Fisher (Eds.), The Limits of Literary Translation: Expanding Frontier in Iberian Languages (pp. 317–338). Kassel: Reichenberger.

Zabalbeascoa, P., & Corrius, M. (2012). How Spanish in an American film is rendered in translation: dubbing Butch Cassidy and the Sundance Kid in Spain. Perspectives.

 

Developments in subtitling by Lindsay Bywood

Following her fascinating presentation at Migrating Texts 2014, in this blog post academic and professional subtitler Lindsay Bywood provides more information about developments in audiovisual translation.

Subtitling has, in recent years, become a somewhat more visible presence on our screens than it once was, thanks in part to the popularity and success of series such as The Killing, Borgen, and Inspector Montalbano to name but a few. It seems that the TV audience in the UK are slowly becoming more willing to consume foreign-language film and television with subtitles. This is extremely good news for practitioners and scholars of audiovisual translation (AVT) who have been working to make video material accessible to speakers of other languages and those with disabilities for some decades now.

The practice of subtitling consists in converting speech to text (with or without translation), but this conversion is by no means simple. Firstly, we can speak faster than we can read, so the subtitle must usually be shorter than the utterance. It is the subtitler’s task to edit the spoken text to a length at which the viewer can read it and still have time to appreciate the visual content offered by the programme producer. Secondly, there are technical constraints: foreign-language subtitles are a maximum of two lines, so as not to mask too much of the image, and there is a limit to the length of the text line which can be displayed on the screen. Additionally, the subtitler needs to make the text as readable as possible in order to allow the viewer easy access to the meaning of the dialogue. To this end, the subtitler has to follow both semantic and syntactic rules alongside the technical and timing constraints.
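As a rough illustration of how these constraints interact, the sketch below estimates how much text fits in a subtitle of a given duration. The reading speed and line length used here are assumptions made for the example; actual guidelines vary by broadcaster, language and market.

```python
# Illustrative values only; real guidelines differ between companies and markets.
READING_SPEED_CPS = 15   # assumed characters per second a viewer can comfortably read
MAX_LINE_LENGTH = 37     # assumed maximum characters per subtitle line
MAX_LINES = 2            # foreign-language subtitles rarely exceed two lines

def max_subtitle_chars(duration_seconds: float) -> int:
    """Largest character count allowed by both reading speed and screen layout."""
    by_reading_speed = int(duration_seconds * READING_SPEED_CPS)
    by_layout = MAX_LINE_LENGTH * MAX_LINES
    return min(by_reading_speed, by_layout)

# A 3-second utterance containing 120 spoken characters must be condensed:
print(max_subtitle_chars(3.0))  # -> 45, so roughly 60% of the wording has to go
```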

The subtitler does have some help, however, as their task differs from that of the text translator in that for the text translator, the target text replaces the source text, whilst for the subtitler, the source text remains unaltered alongside the target text. A film or television programme is a multi-semiotic text, and the viewer receives meaning through the dialogue, the images on screen, the music and sound effects, as well as the subtitles (and any other text on screen).

The biggest difference between translation subtitling and other forms of translation is that it is also the subtitler’s job to decide when the subtitles should be seen on screen, that is, the in-times (when the subtitle appears) and the out-times (when it disappears). A subtitle file consists of text and timecode, and the timecode is what facilitates this process; the timecode in the file corresponds to a timecode carried in the audiovisual material. Here too there are rules which have been developed over time: some through viewer research, and some just through convention.
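A subtitle cue can therefore be thought of as a piece of text plus an in-time and an out-time. The sketch below models this in a simplified way and renders a cue in the SubRip (.srt) layout, assumed here purely as one common exchange format; professional workflows use a range of other formats and frame-accurate timecodes.

```python
from dataclasses import dataclass

@dataclass
class SubtitleCue:
    index: int
    in_time: float    # seconds from programme start (when the subtitle appears)
    out_time: float   # seconds from programme start (when it disappears)
    text: str

def to_timecode(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timecode used by SubRip files."""
    ms = round(seconds * 1000)
    hours, rest = divmod(ms, 3_600_000)
    minutes, rest = divmod(rest, 60_000)
    secs, millis = divmod(rest, 1_000)
    return f"{hours:02}:{minutes:02}:{secs:02},{millis:03}"

def to_srt(cue: SubtitleCue) -> str:
    """Render one cue as an SRT block: index, 'in --> out', then the text."""
    return (f"{cue.index}\n"
            f"{to_timecode(cue.in_time)} --> {to_timecode(cue.out_time)}\n"
            f"{cue.text}\n")

print(to_srt(SubtitleCue(1, 12.04, 15.28, "Where have you been?\nWe waited all night.")))
```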

The other form of subtitling is subtitling for the D/deaf and the hard-of-hearing, sometimes called SDH. This is usually done in the same language as the programme and is also the work of a subtitler, who may well be the same person who does translation subtitling. There are some differences in the rules applied to this form of subtitling, though: since research has shown that D/deaf and hard-of-hearing people prefer to have more text rather than less, these subtitles can be longer than two lines, and are often verbatim, that is, completely unedited. Also, because these viewers have little or no access to the aural elements of the audiovisual content, there is a need to indicate other non-verbal noises, such as sound effects, laughter, etc. Additionally, this viewer group requires an indication of who is speaking, which is done using various methods, including speaker labels, colours, and positioning the subtitle under the mouth of the person speaking.

There is a legal requirement in many European countries for a certain amount of television and film to be made accessible to those with disabilities, and the UK, through Ofcom, leads the way in this respect. As many television programmes are live (not only news, but also sporting events and live entertainment programmes such as The X Factor and I’m a Celebrity… Get Me Out of Here), this poses a problem for the broadcaster: how to subtitle these programmes when even the fastest typist cannot keep up with the speed of an average speaker. As is often the case, technology provides an answer of sorts. The most recent solution for what is termed ‘live subtitling’ has resulted in the creation of a new professional profile within AVT: the respeaker. The respeaker is trained to listen to the audio of a TV programme and ‘respeak’ this audio in subtitle form, with punctuation, into a speech recognition system, such as might be used by people who cannot touch type. This system then interfaces with subtitling software to produce the subtitles on screen, providing a faster service than was previously possible. The speech recognition system is trained to recognise the respeaker’s voice and accent, which results in a high level of accuracy. However, the system struggles to deal with homophones, leading to some of the sometimes hilarious mistakes that we see lampooned on social media.

[Image: a live subtitle reading ‘year of the whores’, one of the homophone errors described above]

To work as an offline subtitler (that is, not doing live subtitling as described above) it is usually necessary to have subtitling software. Previously such software was either very expensive or the proprietary software of commercial subtitling companies. The market has now expanded and cheaper software exists, along with freeware and cloud-based platforms that offer all of the essential functionality needed to work as a subtitler but might lack some of the more sophisticated tools of the more established packages. There are many courses in subtitling, from one-day introductions up to Master’s courses, and some companies also offer training to interested parties.

With the explosion in audiovisual content, the interest in and demand for subtitling has increased exponentially, which is seen as a positive development throughout the industry.

About the author

Lindsay Bywood studied German and Philosophy at the University of Oxford and holds an MA in Translation from the University of Salford. She has been working in subtitling since 1998, starting as a subtitler and quickly progressing to senior management. Most recently she was Director of Business Development at VSI, an international subtitling and dubbing company with headquarters in London. Lindsay is currently studying for a PhD in subtitling at CenTraS, University College London. She teaches at MA level and runs workshops in project management, AVT, post-editing, and professional skills for translators. She is a member of ESIST, speaks regularly at translator training events, and has published several papers on subtitling. Her research interests include diachronic variation in the subtitling of foreign films into English, the didactics of translation, machine translation for subtitling, and the interface between academia and industry.

www.ucl.ac.uk/centras/phd-studies/LindsayBywood

Migrating Texts recap: Subtitling

Over two days, Migrating Texts brought together 17 expert speakers, four panel chairs, and a wide range of attendees from academia and the cultural and creative industries to discuss subtitling, translation and adaptation. We’re incredibly grateful to all those who came and contributed to the conversation. For those who couldn’t make it, over three blogs we’ll be bringing you the main points from each session. First up: subtitling.

L-R: Dr Laura Incalcaterra McLoughlin (NUI Galway),  Prof Kirsten Malmkjær (University of Leicester), Dr Huw Jones (MeCETES, University of York), Dr Sonali Joshi (Day for Night), Lindsay Bywood (UCL/professional subtitler).

The afternoon began with a session entitled ‘Subtitling and Foreign-Language Teaching & Learning’. Our first speaker, Prof Kirsten Malmkjær (University of Leicester), laid the foundations for the rest of the afternoon by introducing the Translation and Language Learning report, funded by the European Commission. The key question is how we order the main language competences – reading, writing, speaking and listening – and where translation fits into this. While translation of large chunks of text was the main pedagogical tool in Greek and Latin classrooms, it has fallen out of favour in recent years, as pressure increases for language students to learn ‘communication’ skills. As Prof Malmkjær asks, what is translation if not communication? Or, what is communication if not translation? ‘Communication’ today seems to mean language skills that students can use in business or travel, rather than gaining a deeper understanding of other cultures.

The report spans a variety of EU and non-EU countries, with a literature review, analysis of policy, and surveys of language teachers. In addition, the team carried out focus groups in Leicester and Tarragona. Prof Malmkjær highlighted that there is no common European policy for language learning, and only the Common European Framework mentions translation and interpreting as language competences. Somewhat surprisingly, the UK stands out among EU and other countries in having translation as a key part of language learning (although with more focus on accuracy than on fluidity of expression or creativity). The new statutory curriculum introduced in the UK in 2014 includes translation in a more rigorous programme of language learning. However, as the report outlines, while translation can be a very useful tool for language learning, there is still a fear among teachers of using it, especially in multilingual classrooms. And while the EU has published the report, education is a national competence, so it can only suggest guidelines and hope that national governments incorporate them into policy decisions.

Our next speaker, professional subtitler and UCL PhD student Lindsay Bywood, presented ‘everything you need to know about subtitling’. She reminded us that while we usually think of subtitling as audiovisual translation, there is also surtitling, subtitling for the deaf and hard-of-hearing, and live subtitling (for news broadcasts, for example). In general, most Western and Central European countries dub whereas the UK and Scandinavia subtitle; the factors behind this include community size, cost and speed. In the Arab world subtitling has traditionally been the norm, but, with literacy issues taken into account, dubbing is on the rise. Subtitled media have only recently become popular in the UK: it began with Amélie in 2001, but the real boom came with The Killing in 2011: “All of a sudden my friends were asking me about my job”. Lindsay suggested that with smartphones we are all more used to text as entertainment, and it is also cheaper to buy and subtitle programming than to produce new content.

Lindsay explained some of the practicalities of subtitling. We can listen to a lot more text than we can read, and the text needs to fit on the screen, so subtitlers have to condense meaning. While certain rules have evolved over time (text should be on screen for 1-6 seconds, subtitles shouldn’t cross shot changes, etc.), she maintained that more research is needed to see how these rules work for real viewers. ‘Respeaking’ is the newest method of subtitling, using speech recognition software, although homophones can cause problems. It usually takes 3-5 days to subtitle a film well, but different markets, and especially different countries, require different standards.
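As a hedged sketch of the timing rules mentioned above, the function below flags cues that stay on screen for less than one second or more than six, or that straddle a detected shot change. The thresholds are conventions rather than hard limits, and in a real workflow the shot-change times would come from detection software or a spotting list.

```python
def timing_issues(in_time: float, out_time: float, shot_changes: list) -> list:
    """Return a list of rule-of-thumb timing problems for one subtitle cue."""
    issues = []
    duration = out_time - in_time
    if duration < 1.0:
        issues.append("too short: viewers may not have time to read it")
    if duration > 6.0:
        issues.append("too long: viewers may re-read it or lose the image")
    # Flag any detected shot change falling inside the cue's display interval.
    if any(in_time < cut < out_time for cut in shot_changes):
        issues.append("crosses a shot change")
    return issues

# A 7.5-second cue spanning a cut at 12.3 s breaks two of the conventions:
print(timing_issues(10.0, 17.5, shot_changes=[12.3]))
```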

The final presentation of the session was ‘Subtitling in the Language Classroom’ from Dr Laura Incalcaterra McLoughlin (NUI Galway). Like Prof Malmkjær, Dr Incalcaterra highlighted how “translation is actively discouraged” in classrooms because it is seen to limit communication, but translation is communication. Translation has also been criticised for being text-bound and monosemiotic; today we use more multimedial, polysemiotic texts, and that is where subtitling comes in. Subtitling involves reflection, problem solving and flexibility, working out how the language fits with images and sound. Using films in classrooms also encourages intercultural learning, reflecting on cultural differences. Subtitling can also promote literacy in general, as well as visual and audiovisual literacy, reading symbols and body language. Subtitled videos can be shared and shown to students’ peers, allowing for peer review much more readily than written translation does. ClipFlair is a free online platform, sponsored by the EU, for subtitling, dubbing and audiodescribing, perfect for use in language classrooms.

Our second session of the afternoon, ‘Foreign-Language Film Distribution and TV Programming in the UK’, began with Dr Huw Jones (University of York) introducing his work for MeCETES on the market for foreign-language films in the UK. You can download Dr Jones’ whole presentation here. Britain has the lowest proportion of foreign-language film viewers: only 5% say they watch foreign-language films regularly. Why are people put off foreign films? A dislike of subtitles, the perception that they are too ‘arty’, bad acting, cultural prejudices, limited availability, and a “characteristically insular mindset” which not only stops us learning languages but also makes us reject foreign products all contribute. However, 25% of Britons surveyed say there are not enough foreign-language films in the UK, especially as foreign-language films are being squeezed out by American arthouse films and live theatre screenings. Cinemas are also not catering for the new language markets; for example, there is a big disparity between the number of Polish speakers in the UK and Polish film screenings. The typical foreign-language film fan is young, urban, educated, earns less than £30k, but has high cultural capital. MeCETES is trying to better understand the market to give policy recommendations to the EU, which currently supports the mobility of foreign films through the MEDIA programme. In the following discussion, Paul Kaye, from the DGT at the European Commission, added that films should be made available with subtitles in the original language to help language learners.

Finally, Dr Sonali Joshi, founder of Day For Night, gave the industry perspective on subtitling and distribution. Day For Night has been operating for about a year, bringing films found at festivals to the UK and Ireland. They also subtitle their own films in-house, as well as working for art galleries, factual programming and films for other companies. Dr Joshi explained how films in 33 languages were released in the UK in 2012, but often only one or two per language. The greatest number of foreign releases are in French, but the biggest box-office draw is Hindi. The decreasing distinction between the programming at independent and chain cinemas means less space for foreign films, which is why Day For Night screens films in non-traditional places like universities and galleries. Dr Joshi suggested that we need more film education to encourage young people to watch foreign films, as the BBC 2 series Moviedrome did from 1988 to 2000. When asked whether Video On Demand makes it easier to distribute foreign-language films, Dr Joshi replied that small films can get completely lost on big VOD platforms like Netflix, but curated platforms like Mubi can be a good alternative.

The afternoon ended with a round table between all the speakers. We discussed how there is often no subtitling budget because people don’t think about it until too late, resulting in a loss of quality and communication. The consensus was that it is very odd that most directors don’t care about subtitles when subtitles govern how so many people experience their films. Dr Joshi explained that most directors rely on favours to get films subtitled for festivals, and maintained that festivals should insist on a standard of subtitling. Our speakers also all agreed that there is space in academia to develop new modules that bring together subtitling skills and cultural film studies.

Accessible Filmmaking by Dr Pablo Romero Fresco

To tie in with Migrating Texts, we will post a range of blogs and articles by experts (academics and industry professionals) in subtitling, translation and intermedial adaptation. We are delighted that our first guest blogger is Dr Pablo Romero Fresco from the University of Roehampton.

The numbers tell a sad story. Almost 60% of the revenue obtained by the top-grossing films made in Hollywood in the last decade comes from the translated (subtitled or dubbed) or accessible (with subtitles for the deaf or audiodescription for the blind) versions of those films, and yet usually only between 0.1% and 1% of their budgets is devoted to translation and accessibility. Relegated to the distribution stage as an afterthought in the filmmaking process, translators have to translate films in very limited time, for small remuneration and with no access to the creative team of the films. This may be seen as a profitable model for the film industry, but more than a decade of research in audiovisual translation has shown that it may also have a very negative impact on the quality and reception of translated films. In fact, renowned filmmakers such as Ken Loach are now beginning to denounce the fact that this model often results in the alteration of their films’ vision and that, even more worryingly, they are not always aware of this.

As a potential way to tackle this problem, accessible filmmaking attempts to integrate audiovisual translation and accessibility into the filmmaking process through collaboration between filmmakers and translators. The aim is to apply this model to training, research and practice, and the first steps have already been taken. From the point of view of training, some audiovisual translation courses now include film content as part of their syllabus. This is the case of the MA in Media Accessibility at the University of Macerata and the MA in Accessible Filmmaking at the University of Roehampton (London), where students learn not only how to make films but also how to make them accessible to viewers in other languages and viewers with hearing and visual loss. Whether or not these students end up working in the film industry, they will be able to speak the same language as filmmakers, which will facilitate their collaboration on the translation of films. Another potential step in this direction is to establish links between film schools and translation institutions so that audiovisual translation students can subtitle or dub, as part of their assignments, real short films made by student filmmakers. This may be more satisfying than translating clips from films that have already been translated, and it will also foster a spirit of collaboration between the two areas.

As far as research is concerned, three new avenues in audiovisual translation are already looking at the common ground between film(making) and translation: universal design applied to media accessibility, part-subtitling and creative subtitling. All three are examples of accessible filmmaking, which could help filmmakers and film scholars explore the aspects of audiovisual translation and accessibility that have an impact on the reception of their (translated) films, and could help audiovisual translation scholars and translators identify the elements from filmmaking and film studies that can contribute to the theory and practice of translation. In this sense, it is worth considering the often-overlooked research on translation carried out by filmmakers such as the scholar and documentarian David MacDougall, for whom subtitling “remains part of the creative process, influencing the pacing and rhythm of the film as well as its intellectual and emotional content”. Now that research on audiovisual translation has come of age and has begun to delve into reception studies, it is in an optimum position to consider the practical and theoretical implications of strengthening its links with film studies and filmmaking.

Finally, if it is to be presented as a realistic alternative to the current treatment of audiovisual translation as an afterthought in the filmmaking process, accessible filmmaking must also be applicable in professional practice. A first example of this is the short documentary about blindness and audiodescription, Joining the Dots. At the University of Roehampton, collaboration between translators and the creative team of the film is set as a requirement for filmmakers who wish to have their films translated or made accessible. This collaboration may range from a couple of meetings between the filmmaker and the translator to more thorough involvement as part of the post-production process. Examples are Michael Chanan’s Secret City (2012), Enrica Colusso’s Home Sweet Home (2012) and the award-winning documentary Hijos de las nubes (2010), directed by Álvaro Longoria and produced by the Spanish actor Javier Bardem. Outside Roehampton, companies such as Subtrain and Sub-ti are also beginning to apply this model. Furthermore, independent filmmakers such as Alastair Cole, whose award-winning films have been presented at the Cannes Film Festival Critics’ Week, are now working with a new figure, the producer of accessibility, who acts as a liaison between the filmmaker and the translators, ensuring that they have access to the creative team of the film for their translation.

As well as the examples mentioned here with regard to training, research and practice, other initiatives have been launched to raise the visibility of accessible filmmaking in academic and non-academic circles, including presentations at film festivals (Venice 2012 and 2013, Edinburgh 2013) to reach professionals in the film industry, a first academic article, a dedicated website and a special item in the Spanish newspaper El País about the application of accessible filmmaking in developing countries.

In an increasingly multilingual society where film co-productions are becoming more and more common, translation has a key role to play. The integration of accessibility and AVT as part of the filmmaking process through the collaboration between filmmakers and translators can help ensure that the filmmakers’ visions are not altered when their films reach foreign audiences and viewers with hearing and visual loss. Time will tell whether or not it is possible to present alternative models to the current consideration of audiovisual translation and accessibility as an afterthought in the film industry, but the fact that accessible filmmaking is already being applied at grassroots level and in independent films provides encouragement to keep pursuing this cause.

About the author

Pablo Romero Fresco is a Reader in Translation and Filmmaking at the University of Roehampton, where he teaches Filmmaking, Dubbing, Subtitling and Respeaking. He also teaches on the MAs in Audiovisual Translation at the Universitat Autònoma de Barcelona and the University of Vigo (Spain). He is the author of the book Subtitling through Speech Recognition: Respeaking (St Jerome) and is Ofcom’s external reviewer assessing the quality of live subtitles in the UK. He has collaborated with Stagetext and the National Gallery in the UK to provide access to live events in museums and galleries for deaf and hard-of-hearing people, and with North-West University, in South Africa, to use respeaking as a tool for social integration in the classroom. He is a member of the first World-wide Focus Group on Audiovisual Media Accessibility organised by the United Nations’ ITU and of the research group CAIAC/Transmedia Catalonia, for which he has coordinated the subtitling part of the EU-funded project DTV4ALL.

Pablo is also a filmmaker. His first documentary, Joining the Dots (2012), about blindness and audiodescription, was screened during the 69th Venice Film Festival and selected for the 2012 London Spanish Film Festival, the 12th International Human Rights Film Festival Watch Docs (Poland) and the 2014 Look & Roll Film Festival on Disabilities (Switzerland). His second documentary, Brothers and Sisters (2012), about education in Kibera (Kenya), was broadcast online by the Spanish newspaper El País in 2013 along with the feature article Levantarse en Kibera and the short film Joel (2012).