Publication Type : Conference Proceedings
Publisher : Springer Nature Switzerland
Source : IFIP Advances in Information and Communication Technology
Url : https://doi.org/10.1007/978-3-031-98360-3_4
Campus : Bengaluru
School : School of Computing
Year : 2025
Abstract : Aiming to transform autonomous driving in agricultural and warehouse environments, this vision-based system offers a low-cost, adaptable, and scalable solution that leverages advanced computer vision and robotics techniques. The system emphasizes precise object detection using YOLO, and combines YOLO with MiDaS for depth-aware identification. For row crops, OpenCV-based color segmentation highlights crop regions, and the largest blob in the camera's view is identified to determine the prominent rows. Stereo vision plays a crucial role in camera calibration, depth-map generation, and accurate human detection with distance estimation. This integrated approach addresses the limitations of traditional LiDAR sensors, which are costly and perform poorly in dusty or densely vegetated conditions. By combining real-time environmental modeling with adaptive navigation, the system improves operational efficiency and effectiveness in complex agricultural scenarios, making it a reliable tool for modern farming practices.
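The depth-aware identification step pairs YOLO object detection with MiDaS monocular depth. Below is a minimal Python sketch of that pairing; it assumes the Ultralytics YOLOv8 package and the public intel-isl/MiDaS torch.hub models, since the abstract does not name specific model variants, and the function depth_aware_detections is illustrative only.

# Minimal sketch: YOLO detections fused with MiDaS relative depth.
# Model choices (yolov8n, MiDaS_small) are assumptions, not from the paper.
import cv2
import torch
import numpy as np
from ultralytics import YOLO

device = "cuda" if torch.cuda.is_available() else "cpu"

detector = YOLO("yolov8n.pt")                                   # assumed YOLO variant
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
midas_tf = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def depth_aware_detections(frame_bgr):
    """Return detected boxes, each with a median MiDaS (relative) depth score."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)

    # MiDaS predicts relative inverse depth, not metric distance.
    with torch.no_grad():
        pred = midas(midas_tf(rgb).to(device))
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().cpu().numpy()

    results = detector(frame_bgr, verbose=False)[0]
    out = []
    for box, cls, conf in zip(results.boxes.xyxy.cpu().numpy(),
                              results.boxes.cls.cpu().numpy(),
                              results.boxes.conf.cpu().numpy()):
        x1, y1, x2, y2 = box.astype(int)
        out.append({
            "class_id": int(cls),
            "confidence": float(conf),
            "box": (x1, y1, x2, y2),
            # Median relative depth inside the box; larger means closer for MiDaS.
            "rel_depth": float(np.median(depth[y1:y2, x1:x2])),
        })
    return out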
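For the crop-row step, the abstract describes OpenCV-based color segmentation followed by selecting the largest blob in view. The sketch below shows one way that could look in Python; the HSV green thresholds and the use of cv2.fitLine to summarise the row direction are assumptions, not details from the paper.

# Minimal sketch of the crop-row step: HSV colour segmentation of vegetation,
# then the largest blob is taken as the prominent row.
import cv2
import numpy as np

LOWER_GREEN = np.array([35, 40, 40])      # assumed HSV bounds for crop pixels
UPPER_GREEN = np.array([85, 255, 255])

def prominent_row(frame_bgr):
    """Return the crop mask and a fitted line (vx, vy, x0, y0) of the largest blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)

    # Remove speckle noise before extracting blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None

    # The largest blob in the camera's view is treated as the prominent row.
    largest = max(contours, key=cv2.contourArea)
    vx, vy, x0, y0 = cv2.fitLine(largest, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return mask, (float(vx), float(vy), float(x0), float(y0))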
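Distance estimation from stereo vision typically converts disparity to depth with Z = f * B / d on a calibrated, rectified image pair. The sketch below illustrates that relation with OpenCV's StereoSGBM matcher; the focal length, baseline, and matcher settings are placeholder values, not the paper's calibration results.

# Minimal sketch of disparity-based distance estimation for a detection box.
# Calibration constants below are assumed placeholders.
import cv2
import numpy as np

FOCAL_PX = 700.0       # assumed focal length in pixels (from calibration)
BASELINE_M = 0.12      # assumed stereo baseline in metres

stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=96,            # must be divisible by 16
    blockSize=7,
    P1=8 * 3 * 7 ** 2,
    P2=32 * 3 * 7 ** 2,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

def distance_at(left_bgr, right_bgr, box):
    """Median metric distance (m) inside a detection box, via Z = f * B / d."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    x1, y1, x2, y2 = box
    patch = disparity[y1:y2, x1:x2]
    patch = patch[patch > 0]              # keep valid disparities only
    if patch.size == 0:
        return None
    return FOCAL_PX * BASELINE_M / float(np.median(patch))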
Cite this Research Publication : V. Gopi Kiran, T. Hemanth Babu, N. A. Vidula, Tripty Singh, "A Low-Cost Vision-Based Framework for Autonomous Driving Using YOLO, MiDaS, and Stereo Vision", IFIP Advances in Information and Communication Technology, Springer Nature Switzerland, 2025, https://doi.org/10.1007/978-3-031-98360-3_4