Sensors are vital for autonomous driving, but they are susceptible to weather-induced noise. To address the performance degradation of point cloud object detection in foggy conditions, this paper proposes DPSF-Net, an efficient dynamic pillar-based point cloud detection network tailored to foggy traffic scenarios.
First, the point cloud data are pillarized, and the model adaptively adjusts the feature extraction process through DynamicPillarVFE, significantly enhancing its ability to handle sparse and irregular point cloud data. Second, a bird's eye view (BEV) backbone network based on residual connections effectively mitigates the vanishing-gradient problem in deep networks. In addition, Triplet Attention (TA) is introduced in the feature enhancement stage to attend to important features more comprehensively and suppress noise interference.
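Triplet Attention captures cross-dimension interactions through three parallel branches, each rotating the tensor so a different pair of dimensions interacts before a lightweight spatial gate is applied. The sketch below is a minimal generic implementation in PyTorch following the original TA design (Z-pool followed by a 7x7 convolution); the exact kernel size and fusion used in DPSF-Net are assumptions, not taken from this paper.

```python
import torch
import torch.nn as nn


class ZPool(nn.Module):
    """Concatenate max- and mean-pooling along the channel axis (2-channel output)."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True)[0],
                          x.mean(dim=1, keepdim=True)], dim=1)


class AttentionGate(nn.Module):
    """Z-pool -> 7x7 conv -> sigmoid gate, applied multiplicatively."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        att = torch.sigmoid(self.bn(self.conv(self.pool(x))))
        return x * att


class TripletAttention(nn.Module):
    """Three branches: (C,H), (C,W) and plain (H,W) interactions, averaged."""
    def __init__(self):
        super().__init__()
        self.gate_ch = AttentionGate()  # channel-height interaction
        self.gate_cw = AttentionGate()  # channel-width interaction
        self.gate_hw = AttentionGate()  # spatial attention

    def forward(self, x):                      # x: (B, C, H, W)
        # Branch 1: rotate so H plays the channel role, gate, rotate back.
        x_ch = self.gate_ch(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        # Branch 2: rotate so W plays the channel role, gate, rotate back.
        x_cw = self.gate_cw(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        # Branch 3: ordinary spatial attention over (H, W).
        x_hw = self.gate_hw(x)
        return (x_ch + x_cw + x_hw) / 3.0
```

The module is shape-preserving, so it can be inserted after any BEV feature map (e.g. a `(B, C, H, W)` pseudo-image produced by pillarization) without changing downstream layers.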
For training, we mixed a small portion of the Multifog KITTI dataset into the KITTI dataset. Experimental results demonstrate that under the moderate-difficulty 3D evaluation metric, our method achieves accuracy improvements of 5.03%, 6.66% and 7.78% for cars, pedestrians and cyclists, respectively, over the PointPillars baseline, significantly enhancing point cloud object detection performance in foggy conditions. Furthermore, DPSF-Net achieves an inference time of 32.36 ms per frame, fully meeting the real-time processing requirements of autonomous driving applications.
This mixed-dataset approach offers an economical way to simulate the influence of fog on sensors, and training on the mixed data enables the network to cope better with fog-induced interference.
