Abstract
With advancements in Unmanned Aerial Vehicle (UAV) systems, UAVs can navigate for aerial photography and environmental inspection in areas deemed unreachable or dangerous for humans. Compared to inertial-based or GPS-based navigation, vision-based navigation via visual odometry (VO) has become a popular alternative, as the former accumulates drift errors while the latter fails where GPS signals are unavailable. However, the feature extraction algorithms underpinning odometry depend heavily on the visibility of features and perform poorly in foggy or hazy environments. To tackle this problem, we apply image dehazing algorithms to improve visibility before performing odometry. In this work, we introduce MLCA-DehazeVO, in which dehazing is performed by a network inspired by the light convolutional autoencoder of Pavan et al. (2020). The network comprises an encoder that extracts a latent representation of the hazy image and a decoder that reconstructs a dehazed image to closely match the corresponding clear image. For odometry, features are extracted with a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network analyses their sequential relations. We evaluate our method on five different trajectories from the Montefiore Institute Dataset of Aerial Images and Records (MIDAIR), which provides paired clear and foggy synthetic image sequences with ground-truth trajectories and pose data for comparison. Our approach outperforms prior-based dehazing methods (e.g. Dark Channel Prior) and performs favourably against other learning-based dehazing methods (e.g. DehazeNet) — in general, PSNR(prior) < 60.00 and SSIM(prior) < 0.400 versus PSNR(learning) > 60.00 and SSIM(learning) > 0.400, with highest PSNR(MLCA) = 67.50 and highest SSIM(MLCA) = 0.747 — and the deep VO approach outperforms traditional VO methods in terms of lower relative translational error t_rel and rotational error r_rel.
Our model improves on existing dehazing and VO architectures with high accuracy, shedding light on autonomous navigation in adverse environments and atmospheric conditions.