PAPERZILLA
Crunching Academic Papers into Bite-sized Insights.


AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove
Paper Summary
Paperzilla title
Talking Gloves: AI-Powered Sign Language Recognition Meets VR for a Chat in Cyberspace!
Researchers developed a smart glove with triboelectric sensors that captures sign language gestures; the sensor signals are then processed by a deep learning model for recognition. The system achieves high accuracy in recognizing 50 words and 20 sentences of American Sign Language and translates them into text and speech within a virtual reality interface, enabling interactive communication between signers and non-signers.
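As a rough sketch of that pipeline, the snippet below shows one way a multichannel glove signal could be classified into a word label. The architecture, channel count, window length, and all identifiers are illustrative assumptions rather than details from the paper; only the 50-word vocabulary comes from the summary above, and the authors' actual model may differ.

```python
# Hypothetical sketch of the glove-to-text step described above.
# Channel count, window length, and the CNN+LSTM architecture are assumptions.
import torch
import torch.nn as nn

NUM_CHANNELS = 15      # assumed number of triboelectric sensor channels
WINDOW_LEN = 200       # assumed samples per gesture window
VOCAB = 50             # 50 word classes, as reported in the summary

class GloveGestureClassifier(nn.Module):
    """Illustrative 1D-CNN + LSTM classifier over multichannel glove signals."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, VOCAB)

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.conv(x)              # (batch, 64, time/2)
        feats = feats.transpose(1, 2)     # (batch, time/2, 64)
        _, (h, _) = self.lstm(feats)      # final hidden state summarizes the gesture
        return self.head(h[-1])           # (batch, VOCAB) word logits

# Usage: classify one windowed gesture and map it to a word index.
model = GloveGestureClassifier()
signal = torch.randn(1, NUM_CHANNELS, WINDOW_LEN)   # stand-in for real sensor data
word_id = model(signal).argmax(dim=-1).item()
```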
Possible Conflicts of Interest
The authors declare competing interests related to a patent application covering sign language recognition and communication systems.
Identified Weaknesses
Limited Dataset
The dataset used for training and validation is limited to 50 words and 20 sentences, which might not be representative of the complexity and diversity of real-world sign language communication.
Inability to Recognize New Sentences (Non-segmentation Model)
While achieving high accuracy on the existing dataset, the non-segmentation model struggles to recognize new sentences created by recombining known word elements in different orders. This limits the practical applicability of the system in real-world scenarios.
Time Latency (Segmentation Model)
Although the segmentation model addresses the limitation of recognizing new sentences, it introduces additional latency because word fragments are processed sequentially. This latency could hinder real-time sign language translation; a toy sketch contrasting the two decoding strategies appears after this list.
Limited Generalizability
The study focuses solely on American Sign Language, and the findings may not be generalizable to other sign languages with different vocabularies and grammatical structures.
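As a rough illustration of the trade-off described in the two items above, here is a toy sketch (not the authors' implementation) contrasting whole-sentence classification with per-fragment decoding. The example sentences, word set, and function names are invented for illustration; a "signal" is simplified to a list of pre-segmented word labels so the control flow stays runnable.

```python
# Toy contrast of the two decoding strategies discussed above.
from typing import List

KNOWN_SENTENCES = ["how are you", "nice to meet you"]        # closed training set (toy)
KNOWN_WORDS = {"how", "are", "you", "nice", "to", "meet"}    # word-level vocabulary (toy)

def recognize_non_segmentation(signal: List[str]) -> str:
    """Whole-sentence classification: one inference per signal, so it is fast,
    but it can only output a sentence seen during training."""
    candidate = " ".join(signal)
    return candidate if candidate in KNOWN_SENTENCES else KNOWN_SENTENCES[0]  # forced choice

def recognize_segmentation(signal: List[str]) -> str:
    """Per-fragment decoding: classify each word fragment, then join.
    Handles recombined word orders, but every fragment adds inference latency."""
    decoded = [w for w in signal if w in KNOWN_WORDS]   # stands in for one model call per fragment
    return " ".join(decoded)

# "are you nice" was never seen as a full sentence during training:
print(recognize_non_segmentation(["are", "you", "nice"]))   # falls back to a trained sentence
print(recognize_segmentation(["are", "you", "nice"]))       # -> "are you nice"
```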
Rating Explanation
This paper presents a novel approach to sign language recognition using AI and VR, demonstrating promising results in recognizing both words and sentences. The integration of VR enhances user interaction and creates a more immersive communication experience. While the limited dataset and language specificity are notable limitations, the innovative methodology and potential for real-time translation warrant a strong rating.
Good to know
This is our free standard analysis. Paperzilla Pro fact-checks every citation, researches author backgrounds and funding sources, and uses advanced AI reasoning for more thorough insights.
Explore Pro →
Topic Hierarchy
Physical Sciences › Computer Science › Human-Computer Interaction
File Information
Original Title:
AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove
File Name:
s41467-021-25637-w.pdf
File Size:
10.05 MB
Uploaded:
July 14, 2025 at 11:23 AM
Privacy:
🌐 Public