The past decades have seen a proliferation of drone (UAV) technologies across a wide range of applications. In each application, niche methodologies have been developed to make drone-acquired imagery fit for that use case. Because these methodologies are designed with only one use case in mind, their findings are hard to translate to other areas. This raises the barrier to entry for UAV usage and the insights it provides, limiting adoption among those who would benefit most. Lowering the barrier to entry for UAV image analysis requires making the analytical models more generalizable. The first limiting factor in accessing analytical models is the requirement for high-quality ground-truth information; a standardized process that reduces the resources required for ground truthing will be the first outcome of this study. A second limiting factor is that generalization is constrained not by the models themselves but by the lack of diversity in the input datasets. Recent developments in image-to-image translation can generate diverse imagery by taking an image and applying new styles, such as different weather conditions, or by changing the subject. This development will be applied in this study to fill the gap in transfer-learning methods for UAV imagery. Finally, to achieve generalization across multiple sensing modalities, the proposed methods are applied to multispectral imagery, which carries additional spectral information in the near-infrared and infrared wavelengths. This generalization is pursued through generative networks, which have previously proven successful in converting RGB to multispectral data.
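The style-based augmentation idea above can be sketched as follows. This is a minimal illustration, not the project's actual method: `translate_style` is a hypothetical stand-in for a trained image-to-image generator (e.g. a CycleGAN-style network), and the simple "haze" blend merely mimics the effect of applying a weather-condition style to an image.

```python
import numpy as np

def translate_style(image: np.ndarray, haze_strength: float) -> np.ndarray:
    """Illustrative stand-in for a trained image-to-image generator.

    Blends the image toward a flat light-grey layer to mimic a hazy
    weather style. A real pipeline would call a learned generator
    (e.g. a CycleGAN-style network) here instead.
    """
    haze = np.full_like(image, 0.8)  # uniform light-grey "haze" layer
    return (1.0 - haze_strength) * image + haze_strength * haze

def augment_dataset(images, strengths=(0.0, 0.3, 0.6)):
    """Expand a small UAV image set with styled variants of each image."""
    return [translate_style(img, s) for img in images for s in strengths]

# Toy example: one 4x4 RGB "image" with values in [0, 1].
rng = np.random.default_rng(0)
dataset = [rng.random((4, 4, 3))]
augmented = augment_dataset(dataset)
print(len(augmented))  # 3 styled variants per source image
```

With a learned generator in place of the blend, the same loop turns a small, homogeneous UAV dataset into a more diverse training set without additional flights.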
Effective start/end date: 1/08/22 → …