Research project

RADAR-SIGN-BRIDGE: "Advancing British Sign Language Recognition and Translation through Non-Intrusive Radar Technology"

Project overview

The prevalence of camera technology has undoubtedly reshaped our lives, offering enhanced communication, security, and innovative possibilities. However, this widespread integration has raised serious concerns regarding personal privacy and security. Many individuals are under constant surveillance, leading to altered behaviour and emotional distress. This concern extends to sign language users, who rely heavily on camera-based Sign Language Recognition Technology (SLRT). These users encounter the same privacy issues, which can lead to self-censorship, fear of judgment, and emotional strain. Moreover, the increasing proliferation of smart technology, particularly devices designed for spoken language recognition and communication, excludes sign language users from accessing and utilising these tools, further deepening the digital divide.

Addressing these pressing concerns and providing sign language users with the freedom to communicate without compromising their privacy requires a technology that adapts to sign languages while safeguarding personal privacy. This project seeks to bridge this gap by introducing radar technology as an innovative alternative, focusing primarily on British Sign Language (BSL) users. Radar technology offers a distinct advantage: it can recognise sign language gestures without capturing visual images, ensuring signers' privacy. However, the key challenge lies in the limited research on radar-based sign language recognition and the absence of publicly available radar databases essential for developing and testing algorithms.

This project addresses these challenges through a structured approach built around four objectives. Central to this effort is the development of LinguaRadar, an advanced Sign Language Radar Simulator specifically designed to capture the intricate nuances of BSL (Objective 1). LinguaRadar goes beyond the basics, incorporating the complexities of facial expressions, detailed finger movements, and the subtleties of body posture. It achieves this by extracting animation data directly from monochrome videos, presenting a game-changing opportunity to use publicly available video databases to generate substantial data. Leveraging this unique capability, the project also focuses on creating a comprehensive radar dataset (Objective 2). This dataset will form the foundation for refining and validating our Language and Learning Models (LLMs), which are crucial for recognising patterns and correlations within radar data and facilitating accurate BSL sign recognition (Objective 3). Ultimately, the project aims to develop a hardware prototype (Objective 4) to translate BSL radar data into spoken commands, enabling a signing interface for widely used virtual assistants such as Alexa.

This project holds substantial promise, with far-reaching impacts. Firstly, developing a simulator capable of utilising existing videos reduces data-collection effort and aligns with sustainability goals by lowering the digital carbon footprint, supporting the UK's mission of achieving Net Zero by 2050. Secondly, creating extensive synthetic radar datasets addresses a significant data shortage within the radar community. Most notably, the potential societal benefits extend beyond BSL to other sign and non-sign language gestures, broadening the reach and impact of this research.

This project transcends academic boundaries, actively seeking to translate its findings into practical applications through workshops and industry demonstrations. This approach ensures that our research delivers tangible solutions, benefiting BSL users and a wider audience. The Principal Investigator's unique expertise in radar technology and machine learning, combined with a diverse and skilled research team, including linguistic experts at the Centre for Speech Technology Research (CSTR) at the University of Edinburgh and the Deafness Cognition and Language Research Centre (DCAL) at University College London, positions us ideally to execute this interdisciplinary project successfully.
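
As a loose illustration of the simulation idea described above, the Python sketch below uses the widely used point-scatterer model of human micro-Doppler: each tracked body part (here, a torso and one signing hand) is treated as a moving point reflector whose two-way phase history produces the time-frequency signature a recognition model would be trained on. The carrier frequency, sampling rate, and toy hand trajectory are illustrative assumptions standing in for video-derived joint trajectories; this is a minimal sketch, not the LinguaRadar implementation.

# Minimal sketch, assuming the common point-scatterer model of human
# micro-Doppler, of turning video-derived joint trajectories into synthetic
# radar returns. All names, parameters, and the toy hand trajectory are
# illustrative assumptions, not the project's actual implementation.
import numpy as np
from scipy.signal import stft

FC = 24e9                 # carrier frequency (Hz); 24 GHz is a common short-range band
C = 3e8                   # speed of light (m/s)
WAVELENGTH = C / FC
FS = 2000                 # slow-time sampling rate (Hz)
t = np.arange(0, 2.0, 1 / FS)   # two seconds of signing motion

def scatterer_echo(range_m, rcs=1.0):
    """Baseband echo of one point scatterer given its range over time (metres).

    The two-way phase history 4*pi*R(t)/lambda is what produces the
    micro-Doppler signature as a body part accelerates and decelerates.
    """
    return rcs * np.exp(-1j * 4 * np.pi * range_m / WAVELENGTH)

# Stand-ins for joint trajectories a pose estimator would extract from
# monochrome video: a torso at constant range plus one signing hand.
torso_range = np.full_like(t, 1.5)                      # torso 1.5 m from radar
hand_range = 1.5 - 0.25 * np.sin(2 * np.pi * 1.8 * t)   # hand oscillating at ~1.8 Hz

# Superpose the echoes (the torso reflects more strongly) and add receiver noise.
signal = scatterer_echo(torso_range, rcs=5.0) + scatterer_echo(hand_range, rcs=1.0)
signal += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# A short-time Fourier transform yields the micro-Doppler spectrogram, the
# time-frequency representation a sign-recognition model would train on.
freqs, frames, Zxx = stft(signal, fs=FS, nperseg=256, noverlap=192,
                          return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(np.fft.fftshift(Zxx, axes=0)) + 1e-12)
print(f"Micro-Doppler spectrogram: {spectrogram_db.shape} (freq bins x time frames)")

In a full simulator, every body part tracked by the pose estimator would contribute its own echo, and spectrograms generated at scale from publicly available video datasets would form the synthetic training corpus described in Objective 2.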

Staff

Lead researchers

Dr Shelly Vishwakarma

Lecturer
Research interests
  • Designing and developing hardware and software frameworks for contextual sensing applications
  • Concurrent physical activity recognition and indoor localisation

Research outputs