Journal of Vision, September 2021, Volume 21, Issue 9 — Open Access
Vision Sciences Society Annual Meeting Abstract
Effective CNN-based Image Dehazing for UAV Deep Visual Odometry
Author Affiliations & Notes
  • Gao Yu Lee
    School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
    Air Traffic Management Research Institute, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
  • Ken-Tye Yong
    School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
  • Hong Xu
    School of Social Sciences, Nanyang Technological University, Singapore
  • Vu N. Duong
    Air Traffic Management Research Institute, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
  • Footnotes
    Acknowledgements  This research is supported by the Civil Aviation Authority of Singapore and Nanyang Technological University, Singapore under their collaboration in the Air Traffic Management Research Institute.
Journal of Vision September 2021, Vol.21, 2193. doi:https://doi.org/10.1167/jov.21.9.2193
Citation: Gao Yu Lee, Ken-Tye Yong, Hong Xu, Vu N. Duong; Effective CNN-based Image Dehazing for UAV Deep Visual Odometry. Journal of Vision 2021;21(9):2193. https://doi.org/10.1167/jov.21.9.2193.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

With advances in Unmanned Aerial Vehicle (UAV) systems, UAVs can navigate for aerial photography and environmental inspection in areas deemed unreachable or dangerous for humans. Compared with inertial- or GPS-based navigation, visual navigation via visual odometry (VO) has become a popular alternative, since inertial navigation accumulates drift errors while GPS signals are unavailable in some locations. However, the feature extraction algorithms used in odometry depend heavily on the visibility of features and perform poorly in foggy or hazy environments. To tackle this problem, we apply image dehazing to increase visibility before performing odometry. In this work, we introduce MLCA-DehazeVO, in which dehazing is performed by a network inspired by a light convolutional autoencoder (Pavan et al., 2020). The network comprises an encoder that extracts a latent representation of the hazy image and a decoder that reconstructs a dehazed image closely matching the clear image. For the odometry stage, features are extracted with a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network analyses the sequential relations between features. We evaluated our method on five trajectories from the Montefiore Institute Dataset of Aerial Images and Records (MIDAIR), which provides paired clear and foggy synthetic image sequences together with ground-truth trajectories and pose data for comparison. Our approach outperforms prior-based dehazing methods (e.g., Dark Channel Prior) and performs favourably against other learning-based dehazing methods (e.g., DehazeNet): in general, PSNR(prior) < 60.00 and SSIM(prior) < 0.400, whereas PSNR(learning) > 60.00 and SSIM(learning) > 0.400, with our highest PSNR(MLCA) = 67.50 and highest SSIM(MLCA) = 0.747. The deep VO approach also outperforms traditional VO methods, achieving lower relative translational error t_rel and rotational error r_rel.
Our model improves the algorithmic architecture of both dehazing and VO with high accuracy, shedding light on autonomous navigation in adverse environments and atmospheric conditions.
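As an illustration of the evaluation metrics reported above, the following is a minimal sketch of PSNR and a simple relative translational error. These are the standard textbook definitions, not the authors' exact evaluation code; the function names and the step-wise form of t_rel are our own assumptions.

```python
import numpy as np

def psnr(clear, dehazed, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth clear
    image and a dehazed output; higher is better."""
    mse = np.mean((clear.astype(np.float64) - dehazed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def t_rel(gt_positions, est_positions):
    """Relative translational error: mean norm of the difference
    between consecutive ground-truth and estimated translation steps
    (one simple variant of the t_rel used in VO benchmarks)."""
    gt_steps = np.diff(np.asarray(gt_positions, dtype=np.float64), axis=0)
    est_steps = np.diff(np.asarray(est_positions, dtype=np.float64), axis=0)
    return float(np.mean(np.linalg.norm(gt_steps - est_steps, axis=1)))
```

SSIM, the other image-quality metric cited, additionally compares local luminance, contrast, and structure statistics, and is typically taken from a library implementation rather than written by hand.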
