In conjunction with ECCV 2022
Sign languages are spatio-temporal languages and constitute a key form of communication for Deaf communities. Recent progress in fine-grained gesture and action classification, machine translation and image captioning points to the possibility of automatic sign language understanding becoming a reality. The study of isolated sign recognition has a rich history in the computer vision community, stretching back over thirty years. Thanks to the recent availability of larger datasets, researchers are now focusing on continuous sign language recognition, sentence alignment to continuous signing, and sign language translation. Advances in generative networks are also enabling progress on sign language production, where written language is converted into sign language video.
The "Sign Language Recognition, Translation & Production" (SLRTP) Workshop brings together researchers working on different aspects of vision-based sign language research (including body posture, hands and face) and sign language linguists. The focus of this workshop is to broaden participation in sign language research from the computer vision community. We hope to identify important future research directions, and to cultivate collaborations. The workshop will consist of invited talks and also a challenge with three tracks: individual sign recognition; English sentence to sign sequence alignment; and sign spotting.
Workshop languages/accessibility: The languages of this workshop are English, British Sign Language (BSL), and International Sign (IS). Interpretation between BSL/English and IS/English will be provided, as will English subtitles, for all pre-recorded and live Q&A sessions. If you have questions about this, please contact us.
Challenge
See ECCV22_SLRTP_Challenge.pdf for challenge descriptions, instructions, and terms and conditions.
Note: Participants are encouraged to request access to the dataset(s) used for the challenge as soon as possible, since it may take several days to obtain permission to download.
The challenge has three tracks. The first track is (1) sign recognition from co-articulated signing for a large number of classes – the task is to classify individual signs in continuous signing sequences, given their approximate temporal extent. This should encourage discussion on how best to (i) exploit complementary signals across different modalities and articulators, (ii) model temporal information, and (iii) account for long-tailed class distributions.
The second track is (2) alignment of spoken language sentences to continuous signing – the task is to determine the temporal extent of a signing sequence, given its English translation. This is a key step towards automatically constructing a parallel corpus for sign language translation, and should encourage discussion on how best to model video and text jointly.
The final track is (3) sign spotting: the task is to identify whether, and if so when, a given sign is performed in a window of continuous signing. Sign spotting has a range of applications, including indexing of signing content to enable efficient search and "intelligent fast-forward" to topics of interest, automatic sign language dataset construction, and "wake-word" recognition for signers.
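To make the three tracks concrete, here is a minimal sketch of one possible input/output interface for each task, written in plain Python. All names, types and formats below are illustrative assumptions for exposition only; the authoritative data formats and evaluation protocols are specified in ECCV22_SLRTP_Challenge.pdf.

```python
# Hypothetical task interfaces for the three SLRTP challenge tracks.
# These signatures are assumptions for illustration; the official
# specification lives in ECCV22_SLRTP_Challenge.pdf.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SigningVideo:
    """A continuous signing sequence, referenced by path and length."""
    path: str
    num_frames: int

def recognise_sign(video: SigningVideo, extent: Tuple[int, int]) -> str:
    """Track 1 (recognition): classify the individual sign whose
    approximate temporal extent (start_frame, end_frame) is given."""
    raise NotImplementedError  # participants plug in their model here

def align_sentence(video: SigningVideo, sentence: str) -> Tuple[int, int]:
    """Track 2 (alignment): return the (start_frame, end_frame) of the
    signing that corresponds to the given English sentence."""
    raise NotImplementedError

def spot_sign(video: SigningVideo, window: Tuple[int, int],
              query_sign: str) -> List[Tuple[int, int]]:
    """Track 3 (spotting): return the temporal extents (possibly none)
    at which `query_sign` is performed within the given window."""
    raise NotImplementedError
```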
Teams that submit their results to the challenges will also be required to submit a description of their systems. At the workshop, we will invite presentations from the challenge winners.
Tentative Schedule
Date: Monday 24th October
Time: 14:00-18:00 GMT+3 (Israel Time), (12:00-16:00 London Time)
The workshop is fully virtual, with a mix of pre-recorded videos and live interaction. Access to the virtual platform is available to ECCV'22 attendees registered with a workshop pass.
- 14:00 Opening Remarks
- 14:10 Challenges Discussion
- 14:35 Invited talk by Sarah Ebling: Developing Sign Language Technologies for the Users: Insights from an NLP Perspective
Abstract: In this talk, I will discuss the challenges involved in automatic sign language processing (sign language translation, recognition, and synthesis), specifically from a natural language processing (NLP) perspective. I will highlight the importance of including the end users in the research and development cycle and talk about aspects to consider when collecting and preparing data to train deep learning models. The talk ends with an exemplary presentation of different research contributions of our group.
Bio: Dr. Sarah Ebling is a senior researcher at the University of Zurich, where she leads the "Language Technology for Accessibility" group, and the head of the "Accessible Communication" group at Zurich University of Applied Sciences. Her research focuses on natural language processing for persons with disabilities and special educational needs, specifically sign language technology and automatic text simplification. She is involved in various international (EU H2020) and national (Swiss National Science Foundation Sinergia) projects and is PI of a large-scale Swiss innovation project entitled "Inclusive Information and Communication Technologies" (2022-2026; https://www.iict.uzh.ch/).
- 15:05 Invited talk by Mark Wheatley: Co-creation in machine translation projects: the role of deaf organisations
Abstract: In this presentation, I reflect on the involvement of the European Union of the Deaf (EUD) in two signed-spoken language machine translation projects that are currently underway: EASIER and SignON. Both projects are funded by the EU Horizon 2020 programme and aim to create flexible mobile applications that can provide machine translation between various European signed and spoken languages. These projects have connected technology experts with sign language academics and deaf-led organisations to ensure the projects are well suited to deaf communities. I describe the role of the EUD in the projects, how deaf community perspectives are included at different stages of project development, and the insights that have emerged from our involvement. In particular, I discuss how user research has been critical in establishing use cases that are acceptable to deaf communities.
Bio: Mark Wheatley has served as the Executive Director of the European Union of the Deaf (EUD) since 2007. Under his leadership, EUD has grown into a more visible organisation, both in terms of its external (social) media coverage and its internal member communication. He is co-author of the EUD book "Sign Language Legislation in the European Union". He has been a member of the World Federation of the Deaf (WFD) Expert Group on Human Rights. Furthermore, he was involved as an expert in the World Health Organisation and World Bank World Report on Disability to ensure that sign language users were adequately included, both in terms of terminology and accuracy of information. He has also contributed to various academic publications on sign language and technological development as an enabler of deaf rights.
- 15:30 Comments on Sign Language Data by Bencie Woll
- 15:35 Coffee Break
- 15:50 Invited talk by Melissa Malzkuhn: Signing Avatars: Fluency, Comprehension, Acceptance
Abstract: In this talk, I will discuss how my lab, Motion Light Lab, has tackled the goal of creating fluency in signing avatars: how fluency relates to comprehension, how that will impact the uses of avatars, and possible applications. What will it be like for users, in particular Deaf children, to navigate all this? I will also give an overview of the field of signing avatars, especially how sign language is constructed and represented, and discuss some challenges posed by specific technologies.
Bio: Melissa Malzkuhn is an activist, academic, artist, and digital strategist with a love for language play, interactive experiences, and community-based change. Melissa believes that access to language is a human right, and that the obstacles that deprive Deaf children of the chance to learn ASL are structural, societal, and systemic. Her goal is to remove these obstacles, and she has led innovative initiatives that help Deaf children access language at many levels of the system that creates them.
She founded and leads creative research and development at Motion Light Lab, a research center at Gallaudet University. The Lab uses creative literature and digital technology to create immersive learning experiences, from storybook apps that have been translated into over 20 languages to motion-capture projects that build signing avatars, all of which expand the 3D technology landscape for deaf children, visual learners, and more.
Melissa is a co-founder of the CREST Network, which focuses on equity and inclusion of deaf people in sign language technology. Her production company Ink & Salt developed an app to teach American Sign Language, The ASL App, which has been downloaded over 2 million times. Third-generation Deaf, she has organized deaf youth and worked with international deaf youth programs, fostering leadership and self-representation. She now collaborates with teams in different countries to support literacy development for deaf children through sign language resources. Melissa led the campaign "Hu: - To Sign Is Human", a call for equal access to sign language as a human right, through screen-printed apparel that advocates for language access for all Deaf children.
Her work has been recognized nationally and internationally. She is an Obama Fellow (inaugural class, 2018) and was recognized as a leading social entrepreneur by Ashoka in 2021. She resides in Maryland with her family.
Socials / Website: @mezmalz on Twitter, www.mezmalz.com, www.motionlightlab.com
- 16:20 Invited talk by Adam Munder: Enabling Inclusive Communication Between Deaf and Hearing with OmniBridge AI Translation
Abstract: I will share my journey as a Deaf person navigating life in the hearing world, and how in 2020 I built a new startup team, OmniBridge, backed by Intel. Today I lead an incredible team of engineers, annotators, linguists, and marketing experts. Our dream is to use technology for good. We have both hearing and deaf people working together toward a common goal: creating a more inclusive world with easy communication between hearing and Deaf people. How do we make these two different languages understood in the same conversation? I will share my stories, struggles, experiences, and challenges, from my childhood up to the hearing corporate world and the beautiful world of Deaf Culture. These experiences have inspired me to make a difference. Our solution is sign language AI translation between Deaf and hearing people: two languages, spoken and signed, in one easy conversation. Let's make our world an inclusive place! Together we can truly create a new world.
Bio: Adam Munder is Deaf, and his primary communication is through sign language. He has worked at Intel in many engineering roles since 2011. Prior to Intel, he received a National Science Foundation Scholarship and completed a variety of internships and work experiences. He has studied applied robotics, mechanical engineering, manufacturing engineering, physics, nanoscale devices, nano/micro-fabrication, and systems engineering. In the last seven years, he has shifted to the computer science field, especially deep learning and machine learning. Based on his experience of climbing the corporate ladder with few designated interpreters, he is very enthusiastic about creating a barrier-free world in which Deaf/HoH people can communicate with hearing people anywhere, at any time, inside or outside of the corporate world. Coming from an all-engineering background, he brings a new technology focus to innovation. He is now a co-founder and General Manager of OmniBridge, an Intel Venture.
- 16:50 Closing Remarks
Dates
- Aug 5: Challenges begin.
- Aug 2: The tentative schedule is announced. More updates coming soon.
- April 7: Workshop website is up! SLRTP'22 will be held as a virtual event in conjunction with ECCV'22 as part of the Sign Language Understanding Workshop. See the previous SLRTP'20 edition at www.slrtp.com.
