Abstract
Building extraction from remote sensing images is the process of automatically identifying and delineating building boundaries in high-resolution aerial or satellite imagery. The extracted building footprints can be used in a variety of applications, such as urban planning, disaster management, city development, land management, environmental monitoring, and 3D modeling. The quality of the results depends on several factors, including the resolution and quality of the imagery and the choice of algorithm. The extraction process typically involves a series of steps: image pre-processing, feature extraction, and classification. Building extraction can be challenging due to varying building sizes and shapes, shadows, and occlusions; however, recent advances in deep learning and computer vision techniques have led to significant improvements in the accuracy and efficiency of building extraction methods. This research presents a deep learning model, based on a semantic segmentation architecture, for building detection in high-resolution remote sensing images. The proposed UNet architecture is trained on the open-source Massachusetts dataset. The model is optimized using the RMSProp algorithm with a learning rate of 0.0001 for 100 epochs. After 1.52 hours of training on Google Colab, the model achieved an F1 score of 83.55%, indicating strong precision and recall.
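The reported F1 score is the harmonic mean of pixel-wise precision and recall for the binary building/background segmentation task. A minimal sketch of how this metric is computed, in plain Python; the true-positive, false-positive, and false-negative pixel counts below are hypothetical illustrations, not values from the paper (they are chosen so the result lands near the reported 0.8355):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Pixel-wise F1 for binary (building / background) segmentation."""
    precision = tp / (tp + fp)  # fraction of predicted building pixels that are correct
    recall = tp / (tp + fn)     # fraction of ground-truth building pixels recovered
    return 2 * precision * recall / (precision + recall)

# Hypothetical pixel counts for one predicted mask vs. its ground truth
tp, fp, fn = 8000, 1500, 1650
print(round(f1_score(tp, fp, fn), 4))  # → 0.8355
```

Note that the harmonic mean penalizes imbalance: a model with high recall but poor precision (or vice versa) scores lower than one where both are moderately high, which is why a single F1 value is a reasonable summary of segmentation quality.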
Publisher
Perpetual Innovation Media Pvt. Ltd.