Publication Type : Conference Paper
Publisher : Springer Singapore
Source : Advances in Intelligent Systems and Computing
Url : https://doi.org/10.1007/978-981-15-5400-1_13
Campus : Coimbatore
School : School of Engineering
Center : Center for Computational Engineering and Networking
Department : Electronics and Communication
Year : 2021
Abstract : Pedestrian detection systems are of paramount importance in today’s cars as the industry moves from conventional ADAS toward higher levels of integrated autonomous driving capability. Legacy systems use a variety of image processing methodologies to detect objects and people in front of the vehicle. However, more robust solutions have been proposed in recent years with the advent of better sensors (lidars, stereo cameras, and radars) along with new deep learning algorithms. This paper details a Robot Operating System (ROS)-based pedestrian detection and localization algorithm that uses a ZED stereo vision camera and a Leddar M16, employing Darknet YOLOv2 to yield fast and reliable object detection. Distance data obtained from the stereo camera point cloud and the Leddar M16 are fused using an adaptive neuro-fuzzy inference system (ANFIS). YOLOv2 was trained on the Caltech Pedestrian dataset using an NVIDIA 940MX GPU, with the sensor interfacing done via ROS.
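The pipeline summarized above can be illustrated with a minimal ROS node sketch that combines a stereo-derived pedestrian distance with a Leddar M16 range reading. The topic names (/pedestrian/stereo_distance, /leddar/range, /pedestrian/fused_distance) and the fixed-weight average are assumptions made purely for illustration; the paper itself fuses the two measurements with a trained ANFIS.

#!/usr/bin/env python
# Minimal sketch of a distance-fusion node, assuming hypothetical topic names.
# The weighted average below is a placeholder for the trained ANFIS used in the paper.

import rospy
from sensor_msgs.msg import Range
from std_msgs.msg import Float32


class DistanceFusionNode(object):
    def __init__(self):
        self.stereo_dist = None  # latest pedestrian distance from the ZED point cloud (m)
        self.leddar_dist = None  # latest range from the Leddar M16 (m)

        # Hypothetical topic names for illustration only.
        rospy.Subscriber("/pedestrian/stereo_distance", Float32, self.stereo_cb)
        rospy.Subscriber("/leddar/range", Range, self.leddar_cb)
        self.fused_pub = rospy.Publisher("/pedestrian/fused_distance", Float32, queue_size=10)

    def stereo_cb(self, msg):
        self.stereo_dist = msg.data
        self.fuse()

    def leddar_cb(self, msg):
        self.leddar_dist = msg.range
        self.fuse()

    def fuse(self):
        if self.stereo_dist is None or self.leddar_dist is None:
            return
        # Placeholder fusion: equal-weight average of the two distance estimates.
        fused = 0.5 * self.stereo_dist + 0.5 * self.leddar_dist
        self.fused_pub.publish(Float32(data=fused))


if __name__ == "__main__":
    rospy.init_node("distance_fusion_node")
    DistanceFusionNode()
    rospy.spin()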
Cite this Research Publication : Anjali Mukherjee, S. Adarsh, K. I. Ramachandran, "ROS-Based Pedestrian Detection and Distance Estimation Algorithm Using Stereo Vision, Leddar and CNN", Advances in Intelligent Systems and Computing, Springer Singapore, 2020, https://doi.org/10.1007/978-981-15-5400-1_13