
AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove

★ ★ ★ ★ ☆

Paper Summary

Paperzilla title
Talking Gloves: AI-Powered Sign Language Recognition Meets VR for a Chat in Cyberspace!

Researchers developed smart gloves with triboelectric sensors to capture sign language gestures, which are then processed by a deep learning model for recognition. The system achieves high accuracy in recognizing 50 words and 20 sentences from American Sign Language and translates them into text and speech within a virtual reality interface, allowing for interactive communication between signers and non-signers.
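The summary does not describe the recognition network's actual architecture. As a purely illustrative sketch (assuming a small 1D-CNN classifier, with made-up channel, window, and class counts rather than values taken from the paper), word recognition from multichannel glove signals could look like this:

```python
# Minimal sketch, NOT the paper's model: a 1D-CNN that maps fixed-length,
# multichannel triboelectric glove signals to word labels. The channel count
# (15), window length (500 samples), and class count (50 words) are
# illustrative assumptions.
import torch
import torch.nn as nn

class GloveWordClassifier(nn.Module):
    def __init__(self, n_channels: int = 15, n_classes: int = 50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Example: a batch of 8 signals, 15 sensor channels, 500 time steps.
model = GloveWordClassifier()
logits = model(torch.randn(8, 15, 500))   # -> (8, 50) word scores
```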

Explain Like I'm Five

Scientists made special gloves that understand sign language when you wear them. These gloves let people who sign chat with others in a pretend computer world, even if the other person doesn't know sign language.

Possible Conflicts of Interest

The authors declare competing interests related to a patent application covering sign language recognition and communication systems.

Identified Limitations

Limited Dataset
The dataset used for training and validation is limited to 50 words and 20 sentences, which might not be representative of the complexity and diversity of real-world sign language communication.
Inability to Recognize New Sentences (Non-segmentation Model)
While achieving high accuracy on the existing dataset, the non-segmentation model struggles to recognize new sentences created by recombining known word elements in different orders. This limits the practical applicability of the system in real-world scenarios.
Time Latency (Segmentation Model)
Although the segmentation model addresses the limitation of recognizing new sentences, it introduces additional latency because word fragments are processed sequentially (see the sketch after this list). This latency could hinder real-time sign language translation.
Limited Generalizability
The study focuses solely on American Sign Language, and the findings may not be generalizable to other sign languages with different vocabularies and grammatical structures.
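To illustrate the segmentation-versus-latency trade-off noted above, here is a minimal sketch, not the paper's actual pipeline: it assumes a simple energy-threshold segmenter and a hypothetical `classify_word` function (such as the word classifier sketched earlier), and shows how classifying word fragments one at a time handles new word orders at the cost of per-fragment processing time.

```python
# Minimal sketch, not the paper's method: split a sentence-level glove signal
# into word-level fragments with a simple activity threshold, then classify
# each fragment in order. New combinations of known words are handled, but
# each fragment adds its own inference time (the latency noted above).
import numpy as np

def segment_by_energy(signal, threshold=0.2, min_len=50):
    """Split a (channels, time) signal into fragments where mean activity exceeds threshold."""
    energy = np.abs(signal).mean(axis=0)      # per-sample activity across channels
    active = energy > threshold
    fragments, start = [], None
    for t, on in enumerate(active):
        if on and start is None:
            start = t
        elif not on and start is not None:
            if t - start >= min_len:           # ignore very short bursts
                fragments.append(signal[:, start:t])
            start = None
    if start is not None and signal.shape[1] - start >= min_len:
        fragments.append(signal[:, start:])
    return fragments

def recognize_sentence(signal, classify_word):
    """Classify each detected fragment in order; unseen word orders still work."""
    return [classify_word(frag) for frag in segment_by_energy(signal)]

# Example: a fake 15-channel sentence signal with two bursts of activity.
sig = np.zeros((15, 1000))
sig[:, 100:300] = 1.0
sig[:, 600:800] = 1.0
print(len(segment_by_energy(sig)))   # -> 2 fragments
```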

Rating Explanation

This paper presents a novel approach to sign language recognition using AI and VR, demonstrating promising results in recognizing both words and sentences. The integration of VR enhances user interaction and creates a more immersive communication experience. While the limited dataset and language specificity are notable limitations, the innovative methodology and potential for real-time translation warrant a strong rating.



File Information

Original Title: AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove
Uploaded: July 14, 2025 at 11:23 AM
Privacy: Public