Authors:
Mohanty Sharada Prasanna, Czakon Jakub, Kaczmarek Kamil A., Pyskir Andrzej, Tarasiewicz Piotr, Kunwar Saket, Rohrbach Janick, Luo Dave, Prasad Manjunath, Fleer Sascha, Göpfert Jan Philip, Tandon Akshat, Mollard Guillaume, Rayaprolu Nikhil, Salathe Marcel, Schilling Malte
Abstract
Translating satellite imagery into maps requires intensive effort and time, which often leads to inaccurate maps of affected regions during disasters and conflicts. The availability of recent datasets, combined with advances in deep-learning-based computer vision, has paved the way toward automated satellite image translation. To facilitate research in this direction, we introduce the Satellite Imagery Competition using a modified SpaceNet dataset. Participants had to develop segmentation models that detect the positions of buildings in satellite images. In this work, we present five approaches based on improvements to the U-Net and Mask R-CNN (Region-based Convolutional Neural Network) models, coupled with distinct training adaptations using boosting algorithms, morphological filtering, Conditional Random Fields, and custom losses. The strong results from these models, with average precision (AP) as high as 0.937 and average recall (AR) as high as 0.959, demonstrate the feasibility of deep learning for automated satellite image annotation.
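To make the U-Net family of approaches mentioned above concrete, the following is a minimal, hypothetical sketch of a U-Net-style encoder-decoder for per-pixel building segmentation. The input resolution, filter counts, and binary cross-entropy loss are illustrative assumptions, not the competition submissions described in the paper.

```python
# Minimal U-Net-style encoder-decoder for binary building segmentation.
# Illustrative assumptions: 256x256 RGB tiles, small filter counts,
# binary cross-entropy loss; not the competition models themselves.
import tensorflow as tf
from tensorflow.keras import layers, Model


def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x


def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: downsample while doubling the number of filters.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and concatenate the matching encoder features
    # (the skip connections that characterize U-Net).
    u2 = layers.UpSampling2D()(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)

    # One-channel sigmoid output: per-pixel building probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)


model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Predicted probability maps from such a model would then be thresholded into building masks; the post-processing steps named in the abstract (morphological filtering, Conditional Random Fields) operate on those masks.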
Cited by: 76 articles.