Navigation and Object Detection for Blind Persons Based on Neural Network

Tools to assist blind people with mobility on pedestrian pathways have been widely launched, approved, and patented. However, existing devices still have shortcomings: they work only on pedestrian paths or for nearby destinations. In this study, both a camera (detection of the pedestrian path) and a LiDAR sensor (detection of surrounding objects) are used to assist the mobility of persons with visual disabilities. In the first stage, the image data from the camera is prepared and processed: conversion from RGB to XYZ, color filtering, close morphology, resizing, and neural network learning and testing. This stage produces three voice attitude messages: whether the user is perpendicular, tilted left, or tilted right relative to the yellow pedestrian lane, or the lane is not captured. In the second stage, the LiDAR distance-point data is processed into a 2D array geometry, followed by neural network learning and testing. This stage produces eight voice attitude messages, reporting the direction and distance of detected objects: right, front, left, right-front, right-left, front-left, right-front-left, or not captured. At illuminance below 15,000 lux, the tests achieved 89.7% accuracy for pedestrian path detection and 87.5% for object detection.


INTRODUCTION
Products, patents, and research discussing canes (the white cane) and similar aids to help blind persons travel along pedestrian lanes have been carried out, either combining assistive devices with sensors or using a camera alone. The sensors used are ultrasonic, infrared (IR), Accelero & PIT, RFID, and 3D TOF sensors, which detect surrounding objects. The ultrasonic sensor detects inanimate objects within a range of 2 meters. The infrared sensor detects living objects nearby. Accelero & PIT detects the direction of motion and, in the event of rotation, the motion is calculated from the change over time in the X- and Y-axis values. All of this information is issued as different tones on a buzzer (Al Kandari et al., 2016; Sukhdeep et al., 2016; Ganguli et al., 2016). Other studies add a GSM-GPS modem to report the blind person's position automatically, or manually via a button, if an incident occurs (Dhananjeyan et al., 2016).
Other recognition techniques use cameras to work along the pedestrian lane. They detect pedestrian lines marked with two white lines at a traffic junction, using extra ROI verification of the lanes. Color and intensity information is used to detect lane markers, and candidate areas are then verified probabilistically; the geometric features of the lane markers are used for verification. Pedestrian lane detection in an unstructured environment integrates both the appearance of the region and the characteristics of the lane border (Le et al., 2012; Mocanu et al., 2018). The vanishing point is used to detect lane boundaries based on color edge orientation, and pedestrian detection is used for occlusion handling. The pedestrian lane detection method is evaluated on a new data set of 2000 images. Object recognition technology uses a camera with the Convolutional Neural Network method (Le et al., 2014).
This research proposes combining two sensors that work together and complement each other, along with a new algorithm for converting the circular-sector (juring) LiDAR distance data into 2D arrays. When both the camera and the LiDAR produce good data, the voice attitude information (lane navigation from the camera, plus object detection, object distance, and spatial geometry from the LiDAR) is generated from both. When the data from the camera is not good, the voice attitude information is generated from the LiDAR only, and vice versa.

RESEARCH METHOD
The proposed data processing has two stages: navigation and detection of surrounding objects on the pedestrian lane. The input to the navigation stage is an RGB image captured by the camera. The input to the object detection stage is a 2D array built from the LiDAR readings. The hardware block diagram is explained below and shown in Figure 1.

Navigation Process from Camera
The navigation steps of pedestrian lane detection are processed on the mini PC in the following sequence, shown in Figure 2.
a) The standard RGB color image array captured by the camera is converted to the XYZ color system. The conversion from RGB to XYZ is shown in equation 1. (1)
b) A color filter isolates the pedestrian lane from the other objects, so that ideally only the lane remains in the image. The filtering process designed specifically for this case is shown in equation 2.
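The paper does not reproduce equation 1 in this extract; a common choice for step a) is the standard sRGB (D65) linear RGB-to-XYZ matrix, sketched below under that assumption.

```python
import numpy as np

# Standard sRGB (D65) linear RGB -> XYZ matrix. Equation (1) is not
# reproduced in the text, so this particular matrix is an assumption.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(image):
    """Convert an H x W x 3 RGB image (values 0-255) to the XYZ color system."""
    rgb = np.asarray(image, dtype=np.float64) / 255.0
    return rgb @ RGB_TO_XYZ.T
```

For a white pixel (255, 255, 255) this yields Y = 1.0, the expected luminance of the reference white.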
(2) c) Close morphology highlights the pedestrian lane as much as possible. The chosen process is the 'close' morphology, where s is the structuring element. The result is shown in equation 3.
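A morphological 'close' is a dilation followed by an erosion with the same structuring element s; the following is a minimal NumPy sketch for illustration, not the authors' implementation (libraries such as OpenCV provide the same operation directly).

```python
import numpy as np

def _slide(mask, s, op):
    """Apply `op` (np.any = dilation, np.all = erosion) over a sliding
    window shaped like the structuring element `s` (2D 0/1 arrays)."""
    H, W = mask.shape
    h, w = s.shape
    pad = np.pad(mask, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros_like(mask)
    for i in range(H):
        for j in range(W):
            out[i, j] = op(pad[i:i + h, j:j + w][s == 1])
    return out

def close(mask, s):
    """Morphological close: dilation, then erosion, with element s."""
    return _slide(_slide(mask, s, np.any), s, np.all)
```

Closing a binary lane mask with a small element fills one-pixel gaps, e.g. the hole in `[[1, 1, 0, 1, 1]]` is filled by a 1x3 element.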
d) Resize the image to the 500x500-pixel input layer of the neural network architecture. The resizing assumes that the color between two known pixels varies linearly; in this model, the color of a pixel is determined by the four pixels that surround it. Points p1, p2, p3, and p4 are the pixels whose colors are known, while p is the pixel whose color is found by the bilinear interpolation approach. e) Learn from the sample data with the help of Matlab to obtain the new weight and bias values.
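The bilinear resize of step d) can be sketched as follows; this is a minimal grayscale implementation for illustration, not the authors' code.

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Resize a 2D (grayscale) image by bilinear interpolation: each output
    pixel p is a weighted mix of the four known pixels p1..p4 surrounding
    its back-projected location in the source image."""
    H, W = img.shape
    ys = np.linspace(0, H - 1, out_h)
    xs = np.linspace(0, W - 1, out_w)
    out = np.empty((out_h, out_w), dtype=float)
    for i, y in enumerate(ys):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, H - 1); fy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, W - 1); fx = x - x0
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]  # interpolate p1-p2
            bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]  # interpolate p3-p4
            out[i, j] = (1 - fy) * top + fy * bot            # mix the two rows
    return out
```

Upsampling a 2x2 image to 3x3, for example, places the exact average of the four corners at the center pixel.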
The system architecture uses the backpropagation method. It consists of an input layer (500 x 500 neurons), a hidden layer of 100 neurons, and an output layer of 2 neurons. The learning produces 500 x 500 new weights and 1 new bias. f) Test using the new weights and bias from the learning results. Testing is done directly from the camera on the mini PC device on which the research algorithm has been installed.
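A forward pass through such a backpropagation network can be sketched as below. The sigmoid activation is an assumption (the text does not name one), and the demo uses tiny stand-in dimensions rather than the paper's 500 x 500 inputs, 100 hidden neurons, and 2 outputs.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass: input layer -> hidden layer -> output layer,
    with sigmoid activations (an assumed choice)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(W1 @ x + b1)      # hidden layer (100 neurons in the paper)
    return sigmoid(W2 @ h + b2)   # output layer (2 neurons in the paper)

# Tiny stand-in dimensions for a runnable demo.
rng = np.random.default_rng(0)
x = rng.random(16)
y = forward(x, rng.random((4, 16)), rng.random(4),
            rng.random((2, 4)), rng.random(2))
```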

Object Detection from LiDAR
The object detection steps are processed on the ARM microcontroller in the following sequence, shown in Figures 4 and 5. Compute Smax minus the measured distance; if the result is > 0, fill the distance pixels from Smin to the measured distance with a value of 0 and from the measured distance to Smax with a value of 1. c) Learn from the sample data with the help of Matlab to obtain the new weight and bias values. The system architecture uses the backpropagation method. It consists of an input layer (196 x 196 neurons), a hidden layer of 20 neurons, and an output layer of 3 neurons. The learning produces 196 x 196 new weights and 1 new bias value. d) Test using the new weights and bias from the learning results; testing is done directly from the LiDAR. e) The array data is searched for the minimum distance value using the algorithm.
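Under one plausible reading of the fill rule above, the distance-to-2D-array step can be sketched as follows: each LiDAR reading becomes one column of a binary grid, with 0 in the cells from Smin up to the measured distance (free space) and 1 from the measurement out to Smax (occupied). The function name and uniform binning are assumptions, not taken from the paper.

```python
import numpy as np

def lidar_to_grid(distances, s_min, s_max, n_bins):
    """Convert a sweep of LiDAR range readings into a binary 2D array.
    Columns are readings; rows are distance bins from s_min to s_max."""
    grid = np.zeros((n_bins, len(distances)), dtype=int)
    edges = np.linspace(s_min, s_max, n_bins)
    for col, d in enumerate(distances):
        if s_max - d > 0:  # mirrors the "Smax - measurement > 0" check
            grid[:, col] = (edges >= d).astype(int)
    return grid
```

A reading of 1.0 m on a 0-2 m grid with 5 bins, for instance, yields the column [0, 0, 1, 1, 1], while a reading beyond Smax leaves its column all zeros; the minimum of `distances` then gives the closest-object distance of step e).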

Voice Information Attitude
The three voice attitudes for the camera sensor input and the eight voice attitudes for the LiDAR sensor input are processed on the mini PC. Each numerical attitude output value is compared in sequence, one by one, with a sound database on the mini PC, and the matching sound is then transmitted via the Bluetooth module to the headset.
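The attitude-to-sound lookup might look like the following; the index ordering and file names are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical mapping from a network's attitude index to the voice clip
# sent over Bluetooth. Indices and file names are illustrative only.
CAMERA_ATTITUDES = {0: "perpendicular.wav", 1: "tilt_left.wav",
                    2: "tilt_right.wav"}
LIDAR_ATTITUDES = {0: "right.wav", 1: "front.wav", 2: "left.wav",
                   3: "right_front.wav", 4: "right_left.wav",
                   5: "front_left.wav", 6: "right_front_left.wav",
                   7: "none.wav"}

def clip_for(sensor, attitude):
    """Look up the voice clip for a sensor's numerical attitude output."""
    table = CAMERA_ATTITUDES if sensor == "camera" else LIDAR_ATTITUDES
    return table[attitude]
```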

Camera Testing
The learning data consists of 400 sample images. The sample images were taken at 4 different locations, namely PSBN Wiyata Guna Bandung, Cicendo, Asia Afrika, and Tamansari Street, shown in Table 3. The tests were carried out in the same places and conditions as the learning samples. Testing was done 400 times, 100 tests at each location, shown in Table 4.

LiDAR Testing
The data samples, places, and conditions are the same as for the camera testing; however, an object is placed in front of the sensor, shown in Tables 5 and 6. The object detection test on the pedestrian path was done in the same place as the camera test. Data collection and testing were carried out during sunny weather with trash bins as objects; sample images from the data processing results are shown in Figures 8 and 9.

CONCLUSION
The camera sensor is disturbed by the effects of light intensity and vibration, while the LiDAR sensor is disturbed by the object's reflective material and the width of the angular bias. Based on the testing and analysis, the neural network algorithm detects the attitude of the object's position and the closest distance. At illuminance below 15,000 lux, the measurement tests achieved 89.7% accuracy for pedestrian path detection and 87.5% for object detection.

Figure 3. Resize Image

Figure 6. Prototype. Yellow-line navigation testing on pedestrian paths was carried out directly at several points in the city of Bandung. Data collection and testing were carried out during sunny weather; a sample image of the data processing results is shown in Figure 7.

Figure 7. Sample Navigation Using Camera

Figure 8. Sample Object Detection, the closest right side distance

Table 1. Camera Learning Pattern

Table 2. LiDAR Learning Pattern

Table 3. Camera Learning Data Samples

Table 5. LiDAR Learning Data Samples