Exploring Augmented Visual Guidance For Sign Language Learning
Lead Researcher(s): Liam Iñaki B. Gillamac, Roberto B. Figueroa
Status: Published
Abstract/summary: In motion learning, learners often observe and compare their performance with experts, sometimes using tools like mirrors. This switching of views, however, adds mental effort. For teachers, guiding multiple learners simultaneously can be challenging. Sign language bridges communication between deaf and hearing individuals. Despite its importance, Filipino Sign Language (FSL) remains underutilized due to limited public awareness and insufficient educational resources. This paper describes the development of an augmented reality (AR) application that provides interactive visual guidance for learning FSL. To assess the system’s responsiveness, camera feedback latency was measured across different devices. Results show that while performance varies with device capability, the system delivers the most immersive learning experience on laptops and newer smartphones, while still showing promise on older and less powerful devices. The findings contribute to broader accessibility initiatives in the Philippines and inform the development of more effective technological solutions for motion learning in general.
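The abstract does not detail how camera feedback latency was measured; a minimal sketch of one way to time the capture-to-render loop per frame is shown below. All names here (`capture_frame`, `render_frame`, and the stand-in lambdas) are hypothetical, not the paper's actual implementation:

```python
import time
from statistics import mean

def measure_feedback_latency(capture_frame, render_frame, samples=30):
    """Return the average capture-to-render latency in milliseconds.

    capture_frame: callable returning one camera frame (hypothetical).
    render_frame:  callable that draws the frame with AR guidance (hypothetical).
    """
    latencies_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        frame = capture_frame()       # grab a frame from the camera
        render_frame(frame)           # overlay guidance and display it
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    return mean(latencies_ms)

# Stand-ins for a real camera/display pipeline, for illustration only:
fake_capture = lambda: b"frame"
fake_render = lambda frame: None

avg_ms = measure_feedback_latency(fake_capture, fake_render)
```

In a real deployment the two callables would wrap the device's camera API and the AR renderer, so the measured number reflects the full feedback loop the learner experiences on that device.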
Keywords:
- Augmented Reality
- Visual Cues
- Motion Guidance
- Motion Learning
- Sign Language