This project presents an approach to detecting British Sign Language (BSL) in real time, establishing a means of communication with deaf and mute people, for whom sign language is the primary way of overcoming the communication barrier. The proposed system uses a Long Short-Term Memory (LSTM) model together with MediaPipe Holistic, which detects landmarks on the hands so that hand gestures can be recognised and deciphered. In BSL, both hands are used for gesturing.

The system detects British Sign Language and displays the output as text. The model currently recognises five signs: “Hello”, “Thank You”, “Sorry”, “Help” and “Home”. The camera captures the frames that act as input to the system, and the prediction is displayed as text together with coloured bar graphs showing the probability of each sign.

The base of the proposed model is detecting the key points accurately and quickly. For this purpose the MediaPipe framework is used. MediaPipe is an open-source framework developed by Google that is widely used for media processing; hand landmark detection is the essential requirement for real-time recognition, and MediaPipe provides 21 predefined landmarks on each hand.

The dataset was created manually by the authors to add variety. It consists of 4,500 frames collected from all the authors, which reduces the risk of overfitting caused by a lack of variation in the data. The dataset is split into training and testing sets, with 5% of the data reserved for testing and the rest used for training.

The model is a Sequential (TensorFlow/Keras) neural network with the ReLU activation function. The number of LSTM and Dense layers was chosen by comparing the categorical training loss with the validation loss. Based on this comparison, the model uses one LSTM layer with 32 units and one dense output layer, is trained for 50 epochs, and achieves an accuracy of 89%. The method is simple, requires limited computation, and works well with a smaller dataset.

The final stage is real-time detection. The model uses the device’s camera as the input source; the cv2 (OpenCV) library reads the video feed frame by frame, and MediaPipe Holistic identifies the key points in each frame, which are then passed to the model for real-time recognition of British Sign Language. The system was tested by the authors to ensure that it works for different people.

The system can be extended with hardware that provides voice output alongside the text, so that when a speech-impaired person shows a sign, the system represents it with both voice and text.
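The following is a minimal sketch of how per-frame keypoints can be extracted with MediaPipe Holistic, as described above. The function and variable names (e.g. `extract_keypoints`) are illustrative and not taken from the repository's source code; a black placeholder frame stands in for a real camera frame.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten the 21 (x, y, z) landmarks of each hand into one feature vector."""
    lh = (np.array([[lm.x, lm.y, lm.z] for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z] for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([lh, rh])

with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    frame = np.zeros((480, 640, 3), dtype=np.uint8)   # in practice, a BGR frame from the camera
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)       # MediaPipe expects RGB input
    results = holistic.process(rgb)                    # run landmark detection
    keypoints = extract_keypoints(results)             # 126-dimensional feature vector (2 x 21 x 3)
```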
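Below is a minimal sketch of the training setup described above: a Sequential model with a single 32-unit LSTM layer and a dense softmax output over the five signs, trained for 50 epochs on a 95/5 train/test split. The 30-frame sequence length, the 126-feature keypoint vectors, and the `sequences.npy`/`labels.npy` file names are assumptions made for illustration, not values confirmed by the repository.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.utils import to_categorical

actions = ["Hello", "Thank You", "Sorry", "Help", "Home"]

# X: (num_sequences, 30 frames, 126 keypoint features); y: integer class labels.
X = np.load("sequences.npy")                              # hypothetical path to the prepared dataset
y = to_categorical(np.load("labels.npy"), num_classes=len(actions))

# 5% of the data is held out for testing, the rest is used for training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)

model = Sequential([
    LSTM(32, activation="relu", input_shape=(X.shape[1], X.shape[2])),
    Dense(len(actions), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))
```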
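And a minimal sketch of the real-time stage: cv2 reads frames from the device camera, MediaPipe Holistic extracts the keypoints, and the trained model predicts the sign from a sliding window of recent frames. The 30-frame window and the `extract_keypoints`, `model`, and `actions` names reused from the earlier sketches are assumptions, not the repository's exact implementation.

```python
import cv2
import numpy as np
import mediapipe as mp

sequence = []                                   # rolling buffer of keypoint vectors
cap = cv2.VideoCapture(0)                       # device camera as the input source

with mp.solutions.holistic.Holistic(min_detection_confidence=0.5,
                                    min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        sequence.append(extract_keypoints(results))   # helper from the earlier sketch
        sequence = sequence[-30:]                     # keep the most recent 30 frames

        if len(sequence) == 30:
            probs = model.predict(np.expand_dims(sequence, axis=0))[0]
            cv2.putText(frame, actions[int(np.argmax(probs))], (10, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

        cv2.imshow("BSL detection", frame)
        if cv2.waitKey(10) & 0xFF == ord('q'):        # press 'q' to quit
            break

cap.release()
cv2.destroyAllWindows()
```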
The first file is the PPT, the second is the source code, and the third is the output. Please don't forget to give a star.