User-Generated Captions: From Hackers, to the Disability Digerati, to Fansubbers

Author:

Scott Hollier, Katie M. Ellis, Mike Kent

Abstract

Writing in the American Annals of the Deaf in 1931, Emil S. Ladner Jr., a Deaf high school student, predicted the invention of words on screen to facilitate access to “talkies”. He anticipated:

Perhaps, in time, an invention will be perfected that will enable the deaf to hear the “talkies”, or an invention which will throw the words spoken directly under the screen as well as being spoken at the same time. (Ladner, cited in Downey, Closed Captioning)

This invention would eventually come to pass and be known as captions. Captions as we know them today have become widely available because of a complex interaction between technological change, volunteer effort, legislative activism, and increasing consumer demand. This began in the late 1950s, when the technology to develop captions started to emerge. Almost immediately, volunteers began captioning and distributing both film and television in the US via schools for the deaf (Downey, “Constructing Closed-Captioning in the Public Interest”). Then, between the 1970s and 1990s, Deaf activists and their allies campaigned aggressively for the mandated provision of captions on television, leading eventually to the passing of the Television Decoder Circuitry Act in the US in 1990 (Ellis). This act decreed that any television with a screen greater than 13 inches must be designed and manufactured to be capable of displaying captions. The Act was replicated internationally, with countries such as Australia adopting the same requirements in their national standards for television sets imported into the country. As other papers in this issue demonstrate, this market ultimately led to the introduction of broadcasting requirements.

Captions are also vital to the accessibility of videos in today’s online and streaming environment—captioning is listed as the highest priority in the definitive World Wide Web Consortium (W3C) Web Content Accessibility Guidelines (WCAG) 2.0 standard (W3C, “Web Content Accessibility Guidelines 2.0”). This recognition of the requirement for captions online is further reflected in legislation, from both the US 21st Century Communications and Video Accessibility Act (CVAA) (2010) and the Australian Human Rights Commission (2014).

Television today is therefore much more freely available to a range of different groups. In addition to broadcast channels, captions are also increasingly available through streaming platforms such as Netflix and other subscription video on demand providers, as well as through user-generated video sites like YouTube. However, a clear discrepancy exists between guidelines, legislation, and the industry’s approach. Guidelines such as the W3C’s are often resisted by industry until compliance is legislated.

Historically, captions have been both unavailable (Ellcessor; Ellis) and inadequate (Ellis and Kent), and in many instances they still are. For example, while the provision of captions in online video is viewed as a priority across international and domestic policies and frameworks, there is a stark contrast between the policy requirements and the practical implementation of these captions. This has led to the active development of a solution as part of an ongoing tradition of user-led development: user-generated captions.
However, within disability studies, research around the agency of this activity—and the media-savvy users facilitating it—has been significantly underexplored.

Agency of Activity

Information sharing has featured heavily throughout visions of the Web—from Vannevar Bush’s 1945 notion of the memex (Bush), to the hacker ethic, to Zuckerberg’s motivations for creating Facebook in his dorm room in 2004 (Vogelstein)—resulting in a wide agency of activity on the Web. Running through this development of first the Internet and then the Web as a place for a variety of agents to share information has been the hackers’ ethic that sharing information is a powerful, positive good (Raymond 234), that information should be free (Levy), and that achieving these goals will often involve working around intended information access protocols, sometimes illegally and normally anonymously. From the hacker culture comes the digerati, the elite of the digital world: web users who stand out through their contributions, success, or status in the development of digital technology. In the context of access to information for people with disabilities, we describe those who find these workarounds—providing access to information through mainstream online platforms in ways that are not immediately apparent—as the disability digerati.

An acknowledged mainstream member of the digerati, Tim Berners-Lee, inventor of the World Wide Web, articulated a vision for the Web and its role in information sharing as inclusive of everyone:

Worldwide, there are more than 750 million people with disabilities. As we move towards a highly connected world, it is critical that the Web be useable by anyone, regardless of individual capabilities and disabilities … The W3C [World Wide Web Consortium] is committed to removing accessibility barriers for all people with disabilities—including the deaf, blind, physically challenged, and cognitively or visually impaired. We plan to work aggressively with government, industry, and community leaders to establish and attain Web accessibility goals. (Berners-Lee)

Berners-Lee’s utopian vision of a connected world where people freely shared information online has subsequently been embraced by many key individuals and groups. His emphasis on people with disabilities, however, is somewhat unique. While maintaining a focus on accessibility, in 2006 he shifted attention to who could actually contribute to this accessibility when he suggested the idea of “community captioning” to video bloggers struggling with the notion of including captions on their videos:

The video blogger posts his blog—and the web community provides the captions that help others. (Berners-Lee, cited in Outlaw)

Here, Berners-Lee was addressing community captioning in the context of video blogging and user-generated content. However, the concept is equally significant for professionally created videos, and media-savvy users can now also offer instructions to audiences about how to access captions and subtitles. This shift—from user-generated content to user-enabled access—must be situated historically in the context of an evolving Web 2.0 and changing accessibility legislation and policy.

In the initial accessibility requirements of the Web there was little mention of captioning at all, primarily because video was difficult to stream over a dial-up connection. This was reflected in the initial WCAG 1.0 standard (W3C, “Web Content Accessibility Guidelines 1.0”), in which there was no requirement for videos to be captioned.
WCAG 2.0 went some way towards addressing this, making captioning online video an essential Level A priority (W3C, “Web Content Accessibility Guidelines 2.0”). However, there were few tools that could actually be used to create captions, and little interest from emerging online video providers in making this a priority.

As a result, the possibility of user-generated captions for video content began to be explored by both developers and users. One initial captioning tool that gained popularity was MAGpie, produced by the WGBH National Center for Accessible Media (NCAM) (WGBH). While cumbersome by today’s standards, MAGpie 2.0, released in 2002, provided an affordable and professional captioning tool that allowed people to create captions for their own videos. However, at that point there was little opportunity to caption videos online, so the focus was more on captioning personal video collections offline. This changed with the launch of YouTube in 2005 and its later purchase by Google (CNET), which led to an explosion of user-generated video content online. However, while the introduction of closed-caption support on YouTube in 2006 ensured that captioned video content could be created (YouTube), the ability for users to create captions, save the output into one of the appropriate captioning file formats, upload the captions, and synchronise them to the video remained a difficult task.

Improvements to the production and availability of user-generated captions arrived first through the launch of YouTube’s automated captions feature in 2009 (Google). This service meant that videos could be uploaded to YouTube and, if the user requested it, Google would caption the video within approximately 24 hours using its speech recognition software. While the introduction of this service was highly beneficial in making captioning easier and ensuring that the timing of captions was accurate, the quality of the captions varied significantly. In essence, if the captions were not reviewed and errors not addressed, the automated captions were sometimes inaccurate to the point of hilarity (New Media Rock Stars). These inaccurate YouTube captions are colloquially described as craptions, and a #nomorecraptions campaign was launched to address inaccurate YouTube captioning and call on YouTube to make improvements.

The ability to create professional user-generated captions across a variety of platforms, including YouTube, arrived in 2010 with the launch of Amara Universal Subtitles (Amara). The Amara subtitle portal provides users with the opportunity to caption online videos, even if they are hosted by another service such as YouTube. The caption file can be saved after its creation and then uploaded to the relevant video source if the user has access to the location of the video content. Amara continues to provide ongoing benefits: it contains a professional captioning editing suite specifically catering for online video, the tool is free, and it can caption videos located on other websites. Furthermore, Amara can address the issues of YouTube’s automated captions—users can take advantage of YouTube’s machine-generated timings, download the captions, edit them in Amara to fix the errors, and then return the corrected captions to the original video, saving a significant amount of time when captioning large amounts of video content.
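The caption files moved between these services are typically simple plain-text formats such as SubRip (SRT), which pairs each numbered cue with start and end timecodes and the text to be displayed. The fragment below is purely illustrative (the timings are invented and the cue text is borrowed from the Ladner quotation above), but it indicates why hand-synchronising such files against a video was tedious before tools like Amara and YouTube's automated timings removed much of that work.

```
1
00:00:01,000 --> 00:00:04,200
Perhaps, in time, an invention will be perfected

2
00:00:04,400 --> 00:00:07,800
that will enable the deaf to hear the "talkies".
```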
In recent years Google has also endeavoured to simplify the captioning process for YouTube users by including its own captioning editors, but these tools are generally considered inferior to Amara (Media Access Australia).

Similarly, several crowdsourced caption services such as Viki (https://www.viki.com/community) have emerged to facilitate the provision of captions. However, most of these crowdsourced captioning services cannot tap into commercial products, instead offering a service for people who have a video they have created, or one that already exists on YouTube. While Viki was highlighted as a useful platform in protests regarding Netflix’s lack of captions in 2009, commercial entertainment providers still have a responsibility to improve their captioning. As we discuss in the next section, people have resorted to extreme measures to hack Netflix to access the captions they need. While the ability for people to publish captions on user-generated content has improved significantly, there is still a notable lack of captions for professionally developed videos, movies, and television shows available online.

User-Generated Netflix Captions

In recent years there has been a worldwide explosion of subscription video on demand service providers, and Netflix epitomises the trend. As such, for people with disabilities, there has been significant focus on the availability of captions on these services (see Ellcessor; Ellis and Kent). Netflix, as the current leading provider of subscription video entertainment in the US, and with large market shares in other countries, has been at the centre of these discussions. While Netflix offers a comprehensive range of captioned video on its service today, there are still videos that do not have captions, particularly in non-English regions. As a result, users have endeavoured to produce user-generated captions for personal use and to find workarounds to access these through the Netflix system. This has been achieved with some success.

There are a number of ways in which captions or subtitles can be added to Netflix video content to improve its accessibility for individual users. An early guide in a 2011 blog post (Emil’s Celebrations) identified that, when the Netflix player runs on the Silverlight plug-in, it is possible to access a hidden menu which allows a subtitle file in the DFXP format to be uploaded to Netflix for playback. However, this does not appear to make the file available to all Netflix users, and is generally referred to as a “soft upload” just for the individual user. Another method, generally credited as the “easiest” way, is to find an SRT file that already exists for the video title, edit the timing to line up with Netflix, use a third-party tool to convert it to the DFXP format, and then upload it using the hidden menu, which requires a specific keyboard command to access. While this may be considered uncomplicated for some, a certain amount of technical knowledge is still required to complete the process, and it is likely to be too complex for many users.

However, constant developments in technology are making access to captions an easier process. Recently, Cosmin Vasile highlighted that caption and subtitle tracks can still be uploaded, provided the older Silverlight plug-in is used for playback instead of the new HTML5 player (Vasile).
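To make the re-timing and conversion step of this workaround concrete, the sketch below shows, in Python, one way it might be done: reading an existing SRT file, shifting every cue by a fixed offset so it lines up with the Netflix stream, and writing out a bare-bones DFXP (TTML) file for a soft upload. This is a minimal illustration only, not the code behind Subflicks, Smartflix, or any other tool discussed here; the file names, the two-second offset, and the stripped-down TTML skeleton are assumptions, and a real player may expect additional namespaces or styling attributes.

```python
# Minimal sketch of the SRT -> DFXP (TTML) conversion and re-timing step described
# above. Not the code of Subflicks, Smartflix, or Netflix; file names, the offset,
# and the bare-bones TTML skeleton are illustrative assumptions.
import re
from datetime import timedelta
from xml.sax.saxutils import escape

SRT_TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")


def parse_srt_time(stamp: str) -> timedelta:
    """Turn an SRT timestamp such as '00:01:02,500' into a timedelta."""
    h, m, s, ms = map(int, SRT_TIME.search(stamp).groups())
    return timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)


def format_ttml_time(t: timedelta) -> str:
    """Format a timedelta as a TTML clock time, e.g. '00:01:04.500'."""
    total_ms = max(int(t.total_seconds() * 1000), 0)
    h, rest = divmod(total_ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"


def srt_to_dfxp(srt_text: str, offset: timedelta = timedelta(0)) -> str:
    """Convert SRT cues into a minimal DFXP/TTML document, shifting each cue by `offset`."""
    srt_text = srt_text.replace("\r\n", "\n")
    paragraphs = []
    for block in srt_text.strip().split("\n\n"):
        lines = [line.strip() for line in block.strip().splitlines()]
        if len(lines) < 3 or "-->" not in lines[1]:
            continue  # skip malformed cue blocks
        start_raw, end_raw = lines[1].split("-->")
        start = format_ttml_time(parse_srt_time(start_raw) + offset)
        end = format_ttml_time(parse_srt_time(end_raw) + offset)
        text = "<br/>".join(escape(line) for line in lines[2:])
        paragraphs.append(f'      <p begin="{start}" end="{end}">{text}</p>')
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">\n'
        "  <body>\n"
        "    <div>\n" + "\n".join(paragraphs) + "\n"
        "    </div>\n"
        "  </body>\n"
        "</tt>\n"
    )


if __name__ == "__main__":
    # Hypothetical file names; shift all cues two seconds later to match the stream.
    with open("episode.srt", encoding="utf-8") as source:
        dfxp = srt_to_dfxp(source.read(), offset=timedelta(seconds=2))
    with open("episode.dfxp", "w", encoding="utf-8") as target:
        target.write(dfxp)
```

In the workaround described above, the resulting DFXP file would then be supplied to the player for the individual user through the hidden menu.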
Others add that it is technically possible to access the hidden feature in an HTML5 player, but an additional Super Netflix browser plug-in is required (Sommergirl). Further, while the procedure for uploading the file remains similar to the approach discussed earlier, there are additional tools available online, such as Subflicks, which provide a simple online conversion of the more common SRT file format to the DFXP format (Subflicks). However, while the ability to use a personal caption or subtitle file remains, the most common way to watch Netflix videos with alternative caption or subtitle files is through the Smartflix service (Smartflix). Unlike other ad hoc solutions, this service provides a simplified mechanism for bringing alternative caption files to Netflix. The Smartflix website states that the service “automatically downloads and displays subtitles in your language for all titles using the largest online subtitles database.”

This automatic downloading and sharing of captions online—known as fansubbing—facilitates easy access for all. For example, blog posts suggest that technology such as this creates important access opportunities for people who are deaf and hard of hearing. Nevertheless, such practices can be met with suspicion by copyright holders. For example, a recent case in the Netherlands ruled that fansubbers were engaging in illegal activities and encouraging people to download pirated videos. While the fansubbers, like the hackers discussed earlier, argued they were acting for the greater good, the Dutch anti-piracy association (BREIN) maintained that subtitles are mainly used by people downloading pirated media, and sought to outlaw the manufacture and distribution of third-party captions (Anthony). The fansubbers took the issue to court in order to seek clarity about whether copyright holders can reserve exclusive rights to create and distribute subtitles. However, in a ruling against the fansubbers, the court agreed with BREIN that fansubbing violated copyright and incited piracy. What impact this ruling will have on the practice of user-generated captioning online, particularly around popular sites such as Netflix, is hard to predict. However, for people with disabilities who were relying on fansubbing to access content, it is of significant concern that the contention that the main users of user-generated subtitles (or captions) are engaging in illegal activities was so readily accepted.

Conclusion

This article has focused on user-generated captions and the types of platforms available to create them. It has shown that this desire to provide access, to set the information free, has resulted in the disability digerati finding workarounds that allow users to upload their own captions and make content accessible. Indeed, the Internet and then the Web as a place for information sharing is evident throughout this history of user-generated captioning online, from Berners-Lee’s conception of community captioning, to Emil and Vasile’s instructions to a Netflix community of captioners, to, finally, a group of fansubbers who took BREIN to court and lost. Therefore, while we have conceived of the disability digerati as a conflation of the hacker and the acknowledged digital influencer, these two positions may again part ways, and the disability digerati may—like the hackers before them—be driven underground.

Captioned entertainment content offers a powerful, even vital, mode of inclusion for people who are deaf or hard of hearing.
Yet, despite Berners-Lee’s urging that everything online be made accessible to people with all sorts of disabilities, captions were not addressed in the first iteration of the WCAG, perhaps reflecting the bandwidth limitations of the medium at the time. This continues to be the case today—although it is no longer difficult to stream video online, and Netflix has reached global dominance, audiences who require captions still find themselves fighting for access. Thus, in this sense, user-generated captions remain an important—yet seemingly technologically and legislatively complicated—avenue for inclusion.

References

Amara. “Amara Makes Video Globally Accessible.” Amara (2010). 25 Apr. 2017 <https://amara.org/en/>.

Anthony, Sebastian. “Fan-Made Subtitles for TV Shows and Movies Are Illegal, Court Rules.” Ars Technica UK (2017). 21 May 2017 <https://arstechnica.com/tech-policy/2017/04/fan-made-subtitles-for-tv-shows-and-movies-are-illegal/>.

Berners-Lee, Tim. “World Wide Web Consortium (W3C) Launches International Web Accessibility Initiative.” Web Accessibility Initiative (WAI) (1997). 19 June 2010 <http://www.w3.org/Press/WAI-Launch.html>.

Bush, Vannevar. “As We May Think.” The Atlantic (1945). 26 June 2010 <http://www.theatlantic.com/magazine/print/1969/12/as-we-may-think/3881/>.

CNET. “YouTube Turns 10: The Video Site That Went Viral.” CNET (2015). 24 Apr. 2017 <https://www.cnet.com/news/youtube-turns-10-the-video-site-that-went-viral/>.

Downey, Greg. Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television. Baltimore: Johns Hopkins UP, 2008.

———. “Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” Info: The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82.

Ellcessor, Elizabeth. “Captions On, Off on TV, Online: Accessibility and Search Engine Optimization in Online Closed Captioning.” Television & New Media 13.4 (2012): 329–352. <http://tvn.sagepub.com/content/early/2011/10/24/1527476411425251.abstract?patientinform-links=yes&legid=sptvns;51v1>.

Ellis, Katie. “Television’s Transition to the Internet: Disability Accessibility and Broadband-Based TV in Australia.” Media International Australia 153 (2014): 53–63.

Ellis, Katie, and Mike Kent. “Accessible Television: The New Frontier in Disability Media Studies Brings Together Industry Innovation, Government Legislation and Online Activism.” First Monday 20 (2015). <http://firstmonday.org/ojs/index.php/fm/article/view/6170>.

Emil’s Celebrations. “How to Add Subtitles to Movies Streamed in Netflix.” 16 Oct. 2011. 9 Apr. 2017 <https://emladenov.wordpress.com/2011/10/16/how-to-add-subtitles-to-movies-streamed-in-netflix/>.

Google. “Automatic Captions in YouTube.” 2009. 24 Apr. 2017 <https://googleblog.blogspot.com.au/2009/11/automatic-captions-in-youtube.html>.

Jaeger, Paul. “Disability and the Internet: Confronting a Digital Divide.” Disability in Society. Ed. Ronald Berger. Boulder, London: Lynne Rienner Publishers, 2012.

Levy, Steven. Hackers: Heroes of the Computer Revolution. Sebastopol: O’Reilly Media, 1984.

Media Access Australia. “How to Caption a YouTube Video.” 2017. 25 Apr. 2017 <https://mediaaccess.org.au/web/how-to-caption-a-youtube-video>.

New Media Rock Stars. “YouTube’s 5 Worst Hilariously Catastrophic Auto Caption Fails.” 2013. 25 Apr. 2017 <http://newmediarockstars.com/2013/05/youtubes-5-worst-hilariously-catastrophic-auto-caption-fails/>.

Outlaw. “Berners-Lee Applies Web 2.0 to Improve Accessibility.” Outlaw News (2006). 25 June 2010 <http://www.out-law.com/page-6946>.

Raymond, Eric S. The New Hacker’s Dictionary. 3rd ed. Cambridge: MIT P, 1996.

Smartflix. “Smartflix: Supercharge Your Netflix.” 2017. 9 Apr. 2017 <https://www.smartflix.io/>.

Sommergirl. “[All] Adding Subtitles in a Different Language?” 2016. 9 Apr. 2017 <https://www.reddit.com/r/netflix/comments/32l8ob/all_adding_subtitles_in_a_different_language/>.

Subflicks. “Subflicks V2.0.0.” 2017. 9 Apr. 2017 <http://subflicks.com/>.

Vasile, Cosmin. “Netflix Has Just Informed Us That Its Movie Streaming Service Is Now Available in Just About Every Country That Matters Financially, Aside from China, of Course.” 2016. 9 Apr. 2017 <http://news.softpedia.com/news/how-to-add-custom-subtitles-to-netflix-498579.shtml>.

Vogelstein, Fred. “The Wired Interview: Facebook’s Mark Zuckerberg.” Wired Magazine (2009). 20 June 2010 <http://www.wired.com/epicenter/2009/06/mark-zuckerberg-speaks/>.

W3C. “Web Content Accessibility Guidelines 1.0.” W3C Recommendation (1999). 25 June 2010 <http://www.w3.org/TR/WCAG10/>.

———. “Web Content Accessibility Guidelines (WCAG) 2.0.” 11 Dec. 2008. 21 Aug. 2013 <http://www.w3.org/TR/WCAG20/>.

WGBH. “MAGpie 2.0—Free, Do-It-Yourself Access Authoring Tool for Digital Multimedia Released by WGBH.” 2002. 25 Apr. 2017 <http://ncam.wgbh.org/about/news/pr_05072002>.

YouTube. “Finally, Caption Video Playback.” 2006. 24 Apr. 2017 <http://googlevideo.blogspot.com.au/2006/09/finally-caption-playback.html>.

Publisher

Queensland University of Technology

Subject

General Medicine
