The advance of technology has brought many improvements over the years, with entertainment and commerce among the main beneficiaries. However, the field is also evolving to support those who need it most, for example people who cannot speak or communicate in the usual way.
These individuals face several challenges and sometimes need an additional device or product to express themselves better in sign language. In an attempt to make this easier for them, Priyanjali Gupta, a young engineering student in India, has created an artificial intelligence model that can understand these gestures in real time.
The project was developed at the Vellore Institute of Technology, using a TensorFlow object detection system running on a computer. Gupta publicly shared her creation on her LinkedIn account, where she demonstrated the capabilities of her AI model in a demo video. You can see the clip below:
“The dataset is created manually by running the Image Collection Python file, which collects images from my webcam for all the signs mentioned below in American Sign Language: Hello, I love you, Thank you, Please, Yes and No,” she explains in her posts and videos on the professional network.
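Gupta's actual collection script is not reproduced in the article, but the workflow she describes (capturing webcam images for each sign label) can be sketched roughly as follows. This is a minimal, hypothetical illustration: the folder layout, label spellings, image count, and function names are assumptions, not her code; only the six signs come from her post.

```python
import os
import uuid

# Signs listed in Gupta's post; folder names are illustrative assumptions.
LABELS = ["hello", "iloveyou", "thankyou", "please", "yes", "no"]
IMAGES_PER_LABEL = 15  # arbitrary choice for this sketch

def make_image_paths(root="collected_images"):
    """Create one folder per sign and return unique file paths to capture into."""
    paths = []
    for label in LABELS:
        folder = os.path.join(root, label)
        os.makedirs(folder, exist_ok=True)
        for _ in range(IMAGES_PER_LABEL):
            paths.append(os.path.join(folder, f"{label}.{uuid.uuid4()}.jpg"))
    return paths

def collect(paths):
    """Capture one webcam frame per path (needs OpenCV and a camera attached)."""
    import cv2  # assumed dependency; not named in the article
    cap = cv2.VideoCapture(0)
    for path in paths:
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(path, frame)
    cap.release()
```

Images gathered this way would then be annotated and fed to a TensorFlow object detection model, which learns to localize and classify each sign in a live video frame.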
To build it, Priyanjali, 20, said she used Amazon's Alexa device, which takes commands when a user speaks, as a test case.
A third year #engineering student Priyanjali Gupta has developed an Artificial Intelligence (AI) model that is able to translate #signlanguage into #english in real-time. She is a third year computer science student at Vellore Institute of #Technology. pic.twitter.com/sTjwD7Hk12
— The Logical Indian (@LogicalIndians) February 18, 2022