Feature and Box Propagation for Video Vehicle Detection
Paper ID: 1408
Updated: 2021-12-03 10:49:44
Poster Presentation
Abstract
Video vehicle detection is more valuable and more challenging than image vehicle detection for an intelligent transportation system. Because of vehicle blurring, occlusion and scale changes in traffic monitoring, using a static vehicle detection network often leads to decreased detection accuracy. To address this problem, this paper proposes a video vehicle detection method based on feature and box propagation. The proposed method incorporates video-specific spatio-temporal information into the static object detection framework at both the feature level and the box level, resulting in higher performance and lower computational complexity. First, we define key frames and use a CNN-based object detection network to extract feature maps and obtain vehicle detection results in these frames. Second, we produce two categories of candidate detection results for non-key frames by propagating feature maps and detection boxes from key frames to non-key frames. Feature propagation is implemented through a flow field, and the first category of candidate detection results for non-key frames is obtained from these propagated features. In addition, detection boxes of key frames are propagated to non-key frames by tracking to generate the second category of candidate detection results. Finally, the detection results of non-key frames obtained in the two ways are fused to produce the final results. Our approach significantly improves the temporal consistency of the detection results. We evaluate the proposed vehicle detection network on two datasets. Experimental results show that the proposed method performs better than the static detector and is also comparable with the state of the art.
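The abstract describes a key-frame scheduling scheme with two propagation paths for non-key frames. The sketch below illustrates that control flow only; the detector, flow estimator, tracker, and fusion routine (detect_on_frame, estimate_flow, warp_features, propagate_boxes_by_tracking, fuse_detections) are hypothetical stand-ins for the components named in the abstract, not the authors' implementation.

```python
# Minimal sketch of key-frame / non-key-frame scheduling with feature and box
# propagation, assuming placeholder components (not the paper's actual networks).

import numpy as np


def detect_on_frame(frame):
    """Hypothetical CNN detector: returns (feature_map, boxes) for a key frame."""
    feature_map = np.zeros((64, frame.shape[0] // 16, frame.shape[1] // 16))
    boxes = np.empty((0, 5))  # each row: x1, y1, x2, y2, score
    return feature_map, boxes


def estimate_flow(key_frame, frame):
    """Hypothetical flow network: per-cell displacement from key frame to current frame."""
    h, w = frame.shape[:2]
    return np.zeros((h // 16, w // 16, 2))


def warp_features(key_features, flow):
    """Feature propagation: move key-frame features along the estimated flow field."""
    c, h, w = key_features.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return key_features[:, src_y, src_x]


def detect_from_features(features):
    """Hypothetical detection head applied to the propagated feature map."""
    return np.empty((0, 5))


def propagate_boxes_by_tracking(key_boxes, key_frame, frame):
    """Box propagation: track key-frame boxes into the current frame (stub)."""
    return key_boxes.copy()


def fuse_detections(feature_level_boxes, box_level_boxes):
    """Fuse the two candidate sets; plain concatenation stands in for the fusion step."""
    return np.concatenate([feature_level_boxes, box_level_boxes], axis=0)


def detect_video(frames, key_interval=10):
    """Run the full detector on key frames; propagate features and boxes elsewhere."""
    results = []
    key_frame, key_features, key_boxes = None, None, None
    for idx, frame in enumerate(frames):
        if idx % key_interval == 0:  # key frame: full detection network
            key_frame = frame
            key_features, key_boxes = detect_on_frame(frame)
            results.append(key_boxes)
        else:  # non-key frame: two cheap propagation paths, then fusion
            flow = estimate_flow(key_frame, frame)
            cand_feat = detect_from_features(warp_features(key_features, flow))
            cand_box = propagate_boxes_by_tracking(key_boxes, key_frame, frame)
            results.append(fuse_detections(cand_feat, cand_box))
    return results


if __name__ == "__main__":
    video = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(25)]
    print([len(b) for b in detect_video(video)])
```

Under this scheduling, the expensive detection network runs only once every key_interval frames, which is where the lower computational complexity claimed in the abstract would come from.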
Authors
Yanni Yang
Chang'an University