Visually impaired or blind people consistently rely on others for their everyday tasks; hence, traveling from one place to another without some form of assistance is a major challenge for them.
The number of visually impaired and blind people, which has grown by roughly 30% every year since 2014, reached a staggering 2.2 billion by the end of 2019.
In Pakistan, 2.5% of the total population suffers from blindness and visual impairment.
The following literature review draws on several projects that aim to design reliable and efficient systems for blind or visually impaired people to detect obstacles and alert the user. Each of these systems, however, has its own limitations and impediments.
Mocanu et al. designed a device for highly dynamic urban areas based on computer vision algorithms. The device applies several image-processing techniques in real time to frames captured from a camera, and the output of the system is transformed into a set of acoustic feedback signals. The whole computing system is placed inside a backpack.
Kiaser and Lawo developed a wearable navigation system comprising indoor navigation, open-source indoor navigation, outdoor navigation and route planning. Indoors, it uses simultaneous localization and mapping (SLAM), which constructs and updates a map of an unknown environment. Outdoors, it uses a WLAN-, GPS- and Bluetooth-based electronic travel aid (ETA).
Marzec and Kos created a prototype aimed at enabling the blind to navigate urban areas while keeping energy consumption low. The navigation system uses infrared sensors mounted on the user's arms to provide haptic feedback, while a thermographic camera measures the distance between the user and surrounding objects and the road, with an accuracy of about 20 cm.
Vithiya et al. proposed a combined Raspberry Pi and Arduino based system interfaced with several analog sensors, such as temperature, heart-rate and gas sensors, thereby monitoring the temperature, pulse and nearby harmful gases around soldiers on the battlefield, along with their current location via GPS.
The issues encountered in the existing systems include the demand for continuous Internet connectivity and the reliance on smart white canes, which become a hassle for the blind, especially in crowded areas. Some devices in the above survey cannot detect objects in multiple directions and at multiple angles; others cannot be carried easily or require some training. Each of these devices addresses only a few of the problems faced by the blind, so our proposed system aims to address multiple problems with a prototype implemented as a wearable.
Hence, our objective is to create a system that can aid the blind in most, if not all, activities of daily life. As discussed above, the visually impaired must be particularly careful when treading through urban environments, so a device combining obstacle detection with haptic feedback, GPS and GSM would prove very handy in such situations. It is also challenging for the blind to recognize people by voice alone, since they may be unable to differentiate between people with similar voices, or recognize people with whom they have not been in contact for a long time. A system that can identify close relatives or associates therefore becomes essential, and face recognition, a branch of image processing, offers a solution to this problem. The challenge, then, is to construct a system with the proposed functionalities, integrate them all, and aid visually impaired people in their daily endeavors.
The proposed system is divided into two main subsystems. A Raspberry Pi 4 handles all the heavy computing tasks required for image processing, GSM, GPS and some analog sensors, while an Arduino Pro Mini interfaces with the ultrasonic sensors, buzzers and vibration motors that provide haptic feedback. The sensors and the Arduino are packaged into five or more compact miniaturized modules built as wearables and attached to several parts of the user's body with elastic bands. The whole system operates in three stages, described below.
The Arduino Pro Mini is embedded in a wearable such as a cloth vest or jacket. This subsystem is equipped with five ultrasonic sensor modules attached to different body parts, allowing obstacles to be detected in five directions: one on each shoulder, one on each knee and one on the hand. The main advantage of these units is portability, so the user feels comfortable and is free to move anywhere. Any object lying in the user's path ahead is detected by the ultrasonic sensor modules, and the interval between buzzer beeps tells the user whether the object is near or far. Furthermore, a toggle switch lets the user choose between the buzzing sound and haptic feedback. The buzzer interval is also synced with an LED.
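The distance-to-beep-interval mapping can be sketched as follows. The paper does not state the timing constants, so the 0.05 s and 1.0 s interval bounds here are assumptions for illustration; the 2–400 cm range matches the HC-SR04's rated span.

```python
# Sketch: map a measured obstacle distance to a buzzer beep interval,
# so that closer objects produce faster beeping. The interval bounds
# (0.05 s when very close, 1.0 s when far) are assumed values, not
# figures from the paper.

MIN_DIST_CM, MAX_DIST_CM = 2.0, 400.0      # HC-SR04 rated range
MIN_INTERVAL_S, MAX_INTERVAL_S = 0.05, 1.0  # assumed beep-interval bounds

def beep_interval(distance_cm: float) -> float:
    """Return the pause between beeps: short when close, long when far."""
    d = max(MIN_DIST_CM, min(distance_cm, MAX_DIST_CM))  # clamp to range
    frac = (d - MIN_DIST_CM) / (MAX_DIST_CM - MIN_DIST_CM)
    return MIN_INTERVAL_S + frac * (MAX_INTERVAL_S - MIN_INTERVAL_S)
```

On the actual hardware, the same mapping would drive either the buzzer or the vibration motor, depending on the toggle-switch mode.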
The camera provides a real-time video feed to the Raspberry Pi, where the data is analyzed using Open Source Computer Vision (OpenCV) techniques. Several packages and dependencies necessary for face recognition, such as Raspberry Pi OS Buster, dlib and OpenCV-Python, are installed on the Raspberry Pi. The deep-learning-based face recognition relies on a technique called deep metric learning, which outputs a 128-d real-valued feature vector. The face recognition process starts by detecting faces, then computing a 128-d face embedding to quantify each face, then training a Support Vector Machine (SVM) on top of the embeddings, and finally recognizing faces in images and video streams. The Raspberry Pi's SD card already stores the faces of the user's relatives and close acquaintances; upon an encounter, the algorithm detects and recognizes the face, compares it with the stored database, and reports a name if the confidence level exceeds 90%. To make the system more versatile, a webcam can also be used by installing the webcam package and changing a single line of code.
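The matching step of deep metric learning reduces to comparing embeddings by distance. The sketch below illustrates this nearest-neighbor comparison; the real system uses 128-d dlib embeddings, while 4-d toy vectors are used here for readability, and the 0.6 distance threshold mirrors dlib's commonly cited default rather than a value from the paper.

```python
import math

# Sketch of the embedding-matching step: each known person is stored as a
# feature vector, and a probe face is assigned to the nearest stored vector
# if it falls within a distance threshold. Real dlib embeddings are 128-d;
# 4-d toy vectors are used here for brevity. The 0.6 threshold follows the
# commonly used dlib default and would need tuning in deployment.

THRESHOLD = 0.6

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, known):
    """Return the name of the closest known embedding, or None if no match."""
    best_name, best_dist = None, float("inf")
    for name, embedding in known.items():
        d = euclidean(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= THRESHOLD else None
```

In the deployed pipeline the same comparison runs once per detected face per frame, and the winning name is handed to the text-to-speech stage.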
To ensure proper tracking and an exact location of the user, a Global Positioning System (GPS) module is interfaced with the Raspberry Pi so that relatives can obtain the user's precise location when necessary, especially in emergencies. The longitude and latitude values obtained from the GPS are forwarded to the Raspberry Pi, where they are stored in a database.
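GPS modules typically report position over a serial line as NMEA sentences, in which latitude and longitude appear in degrees-and-minutes form (ddmm.mmmm). A hedged sketch of the conversion to decimal degrees is shown below; the field layout follows the standard GPGGA sentence, and serial I/O and checksum validation are omitted.

```python
# Sketch: extract decimal-degree latitude/longitude from a standard NMEA
# GPGGA sentence, as a GPS module would emit over its serial line.
# Checksum validation and serial-port handling are omitted for brevity.

def _to_decimal(value: str, hemisphere: str, deg_digits: int) -> float:
    """Convert a ddmm.mmmm (or dddmm.mmmm) field to signed decimal degrees."""
    degrees = int(value[:deg_digits])
    minutes = float(value[deg_digits:])
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gpgga(sentence: str):
    """Return (latitude, longitude) in decimal degrees from a GPGGA sentence."""
    fields = sentence.split(",")
    lat = _to_decimal(fields[2], fields[3], 2)   # ddmm.mmmm + N/S
    lon = _to_decimal(fields[4], fields[5], 3)   # dddmm.mmmm + E/W
    return lat, lon
```

The resulting decimal coordinates are what the system would store in its database and later relay over SMS.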
A Global System for Mobile Communications (GSM) module is interfaced with the Raspberry Pi; its primary function is to relay the user's location to relatives via Short Message Service (SMS), which is particularly useful if the blind user suffers a mishap.
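Sending an SMS through a typical GSM modem uses standard Hayes AT commands: AT+CMGF=1 selects text mode, AT+CMGS addresses the recipient, and the message body is terminated with Ctrl-Z. The sketch below only composes that command sequence; the phone number is a placeholder, and the actual serial write (e.g. via pyserial) is left out.

```python
# Sketch: compose the AT command sequence a GSM modem expects when
# sending one SMS in text mode. AT+CMGF=1 selects text mode, AT+CMGS
# names the recipient (the modem then replies with a '>' prompt), and
# the body is terminated with Ctrl-Z (0x1A). The number used in the
# example is a placeholder; serial I/O is omitted.

CTRL_Z = "\x1a"

def sms_command_sequence(number, text):
    """Return the ordered strings to write to the modem's serial port."""
    return [
        "AT+CMGF=1\r",               # switch modem to SMS text mode
        'AT+CMGS="%s"\r' % number,   # set recipient, wait for '>' prompt
        text + CTRL_Z,               # message body; Ctrl-Z transmits it
    ]
```

In the deployed system the message text would carry the latest latitude/longitude pulled from the database.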
A water sensor is also connected to the Raspberry Pi to detect water around the user and issue a warning as audio speech through the headphones.
Furthermore, several other analog sensors, such as gas, temperature and heart-rate sensors, could also be integrated with the system to provide the user's relatives with much more information.
The presented system is designed and built to help blind and visually impaired people in their daily errands. The device can handle several situations that visually impaired people may face, and it alleviates most, if not all, of the problems they encounter under various circumstances. Using powerful components such as the Raspberry Pi and Arduino, the main objective is to build a smart assistant that helps visually impaired people move from one place to another with speed and confidence by sensing nearby obstacles, recognizing relatives, and relaying the user's location to relatives through GPS and GSM services.
There are five Arduino modules, each interfaced with its own ultrasonic sensor and buzzer along with a toggle switch to alternate between haptic-feedback and sound-only modes. The HC-SR04 ultrasonic sensor module is used, which essentially acts like a sonar. It delivers excellent non-contact range detection from 2 cm to 400 cm with accuracy up to 3 mm and precise readings. The distance to an object is determined by having the transmitter emit a high-frequency sound pulse, which reflects off the object and is caught by the receiver. The simple formula distance = (speed of sound × echo time) / 2 then yields the range; since sound travels at roughly 343 m/s (0.0343 cm/µs) at room temperature, this reduces to distance (cm) ≈ echo time (µs) / 58.
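The echo-time arithmetic can be sketched as follows; the 0.0343 cm/µs figure is the speed of sound at roughly 20 °C, and the division by two accounts for the pulse's round trip.

```python
# Sketch: convert an HC-SR04 echo pulse width (microseconds) into a
# distance. Sound travels ~0.0343 cm/us at room temperature; the pulse
# covers the distance twice (out and back), hence the division by 2.

SPEED_OF_SOUND_CM_PER_US = 0.0343

def echo_to_distance_cm(echo_us: float) -> float:
    """Distance to the reflecting object, in centimeters."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2.0
```

For example, an echo of about 5831 µs corresponds to an object roughly 100 cm away, near the middle of the sensor's rated range.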
The components attached to the Raspberry Pi are enabled by issuing several commands. Once all the attached devices are functioning, the GPS collects longitude and latitude data and stores the values in the preset database file. The GSM module sends notifications, such as the location and other parameters, to the user's relatives via SMS at fixed intervals.
For face recognition on the Raspberry Pi, we created several directories. The dataset/ directory comprises subdirectories containing a number of facial images of each person the user would like to recognize, such as a close relative. The encode_faces.py script finds the faces in our dataset and encodes them into 128-d vectors, and the encodings.pickle file stores these face encodings (128-d vectors, one per face). To detect and localize faces in frames, we rely on OpenCV's pre-trained Haar cascade file, haarcascade_frontalface_default.xml. The main execution script is pi_face_recognition.py. After a face is detected in a frame, the algorithm compares it to the images stored in the dataset and, if the confidence level is high enough, outputs the name of that specific person. A text-to-speech algorithm then generates an audio signal that is forwarded to the user through the headphones.
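One common way to turn per-image comparisons against the dataset into a single reported name is to let each stored image vote and accept the majority name only when enough images agree. This voting scheme is an illustrative assumption (the paper itself reports only a confidence threshold), and the minimum-vote count below is likewise made up for the sketch.

```python
from collections import Counter

# Sketch: per-image voting over the dataset. Each stored image of each
# person either matches the detected face or not; the person with the
# most matching images wins if they clear a minimum vote count.
# MIN_VOTES is an assumed value for illustration, not from the paper.

MIN_VOTES = 3

def majority_name(matches):
    """matches: list of names of dataset images that matched the probe face.

    Returns the majority name, or None if no name has enough votes."""
    if not matches:
        return None
    name, votes = Counter(matches).most_common(1)[0]
    return name if votes >= MIN_VOTES else None
```

The winning name (or silence, when the result is None) is what the text-to-speech stage would announce to the user.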
On a side note, we observed that the Raspberry Pi was incapable of running the CNN detection method; after some tinkering we found that the CNN method requires a powerful computer with a capable GPU, so we opted for the HOG face detection method instead, which requires fewer resources and less computing power.
Moreover, the system has a water sensor interfaced with the Raspberry Pi; when the sensor detects moisture, it sends a signal to the Raspberry Pi, which converts it into a generated audio warning message delivered through the headphones.
The proposed system assimilates the workings of the various modules, providing a multipurpose device for the blind and visually impaired. The device is designed to be comfortable and portable enough to be kept in a pocket. It consistently monitors the user's current location with the help of GPS, gives warnings in the form of sound and haptic feedback when obstacles are detected in the way, and helps identify people based on previously stored images. Its portability and ease of use make it a great alternative to the cane. The acoustic output is given as voice commands through bone conduction headphones. Since all the data is stored and processed on the Raspberry Pi, no Internet connectivity is required for the system to work; because Internet access is not available everywhere along the wearer's path, this is an added benefit. In addition, the device does not use Android or any other touch-screen technology, making it very simple and easy to use. Some further improvements could pave the way for an even more versatile device, such as integration with Google Maps for better GPS performance. More sensors could be added to detect high temperature, gas, etc., and higher-grade sensors could improve accuracy and response times. The circuit board could be designed in a much smaller form factor to make the device lighter and more comfortable to wear, and an NVIDIA Jetson Developer Kit could be used to yield even faster and more reliable results.