Author:
Khoshboresh-Masouleh, Mehdi; Shah-Hosseini, Reza
Abstract
This study tackles the challenge of building mapping in multi-modal remote sensing data by proposing DeepQuantized-Net, a novel deep superpixel-wise convolutional neural network, together with a new red, green, blue (RGB)-depth data set named IND. DeepQuantized-Net
incorporates two practical ideas for segmentation: first, improving object patterns by exploiting superpixels, rather than individual pixels, as the imaging unit; second, reducing computational cost. The generated data set includes 294 RGB-depth images (256
training images and 38 test images) from different locations in the state of Indiana in the U.S., each 1024 × 1024 pixels at a spatial resolution of 0.5 ft, covering different cities. Experimental results on the IND data set demonstrate that the mean F1 score and the average
Intersection over Union score increase by approximately 7.0% and 7.2%, respectively, compared to other methods.
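The abstract does not specify how DeepQuantized-Net aggregates predictions over superpixels, but the general idea of using a superpixel, rather than a pixel, as the labeling unit can be sketched as follows. This is a minimal, hypothetical illustration (the function name `superpixel_pool` and the averaging rule are assumptions, not the paper's method): per-pixel class scores are averaged within each precomputed superpixel, and every pixel in that superpixel receives the superpixel's argmax class.

```python
import numpy as np

def superpixel_pool(scores, segments):
    """Assign one class per superpixel by averaging per-pixel scores.

    scores   : (H, W, C) float array of per-pixel class scores
    segments : (H, W) int array of superpixel labels
    returns  : (H, W) int array of per-pixel class predictions
    """
    labels = np.zeros(segments.shape, dtype=np.int64)
    for sp in np.unique(segments):
        mask = segments == sp
        mean_score = scores[mask].mean(axis=0)   # (C,) mean over the superpixel
        labels[mask] = int(mean_score.argmax())  # one class for all its pixels
    return labels

# Toy example: a 2x4 image split into two superpixels, two classes.
scores = np.array([[[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.3, 0.7]],
                   [[0.7, 0.3], [0.6, 0.4], [0.2, 0.8], [0.1, 0.9]]])
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1]])
pred = superpixel_pool(scores, segments)
# The left superpixel averages to class 0, the right to class 1.
```

In practice, the superpixel map would come from an algorithm such as SLIC; aggregating over superpixels enforces spatially coherent labels and reduces the number of units to classify.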
Publisher
American Society for Photogrammetry and Remote Sensing
Subject
Computers in Earth Sciences
Cited by
10 articles.