Visual Interpretation of Film Translation

Author:

Eng Thérèse

Abstract

Which references are considered necessary for understanding and empathy in the visual interpretation of translated feature films? This question is the starting point for this article on audiovisual translation and visual interpretation. Visual interpretation is a relatively unexplored field of research that can be linked to cognitive science, semiotics, and audiovisual translation. Just over a decade ago, there was little or no research into visual interpretation in Sweden or the other Nordic countries. The first Swedish research initiatives took the form of workshops on visual interpretation organized by Jana Holsanova, Mats Andrén, and Cecilia Wadensjö (2010-2014) and resulted in a report on the subject (Holsanova et al. 2016). The task of the visual interpreter is to select and describe relevant information, such as events, environments, people, characters and their appearance, facial expressions, gestures, and body movements, in television programs, cinema, or theater performances, giving verbal descriptions of visual scenes that evoke vivid mental images and audience empathy. Visual interpretation should contribute to our understanding and convey impressions and mood. It is a so-called intermodal translation, because the visual interpreter transfers content from image to words (Jakobson 1959; Reviers 2017). Through language, listeners should be able to follow the action. They should not only know what is happening, but also be able to laugh at the same time as everyone else, understand why a certain sound occurs when it is heard, and know who is doing what. It is thus a matter of completing what is missing in the multimodal interaction (Holsanova 2020: 4). According to professional visual interpreters, the aim is to use a neutral voice and to be clear, concise, and descriptive, so that the target group can imagine what something looks like with the help of internal images. Against the background of today's rapid technological development, we also reflect on the opportunities and challenges of automated visual interpretation and translation using ChatGPT.

Publisher

Yerevan State University

References

1. Holsanova, Jana. 2020b. “Uncovering Scientific and Multimodal Literacy through Audio Description.” Journal of Visual Literacy 39(3), Special Issue. DOI: 10.1080/1051144X.2020.1826219

2. Holsanova, Jana. 2020a. “Att beskriva det som syns men inte hörs. Om syntolkning” [Describing what is seen but not heard. On visual interpretation]. Humanetten 44: 125-146. DOI: 10.15626/hn.20204406

3. Holsanova, Jana, Johansson, Roger, and Lyberg-Åhlander, Viveka. 2020. “How the Blind Audiences Receive and Experience Audio Descriptions of Visual Events” (project presentation). Book of Extended Abstracts, 3rd Swiss Conference on Barrier-free Communication, 39-41.

4. Holsanova, Jana. 2016. “Cognitive Approach to Audio Description.” In Researching Audio Description: New Approaches, edited by A. Matamala and P. Orero, 49–73. London: Palgrave Macmillan.

5. Holsanova, Jana, Wadensjö, Cecilia, and Andrén, Mats (eds.). 2016. Syntolkning – forskning och praktik [Visual interpretation – research and practice]. Lund University Cognitive Studies 166 / Myndigheten för tillgängliga medier, report no. 4.
