Remote sensing images pose many challenges, such as large data scale, complex illumination conditions, occlusion, and densely packed targets. Existing semantic segmentation methods for remote sensing images segment small and irregular targets inaccurately and extract edges poorly. The authors propose a remote sensing image segmentation method based on a deep convolutional neural network (DCNN) and multiscale feature fusion. First, an end-to-end remote sensing image segmentation model with complete residual connections and multiscale feature fusion was designed on top of a deep convolutional encoder–decoder network. Second, weighted high-level features were obtained through an attention mechanism, which better preserves the edges, texture, and other detail of remote sensing images. Experimental results on the ISPRS Potsdam and Urban Drone datasets show that, compared with competing methods, the proposed approach segments small and irregular objects more accurately and achieves the best overall segmentation performance while maintaining computation speed.
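To make the described pipeline concrete, the following is a minimal PyTorch sketch of an encoder–decoder segmentation network that combines residual blocks, attention-weighted high-level features, and multiscale fusion by concatenating upsampled deep features with shallower encoder features. The class names (`FusionSegNet`, `ChannelAttention`, `ResidualBlock`), the squeeze-and-excitation style of the attention, and all layer sizes are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Two convolutions with an identity shortcut (stand-in for the paper's residual connections)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # residual shortcut


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gating used here to weight high-level features (assumed form)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> per-channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)   # reweight feature maps


class FusionSegNet(nn.Module):
    """Encoder-decoder that fuses multiscale encoder features with attention-weighted deep features."""
    def __init__(self, in_ch=3, num_classes=6, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), ResidualBlock(base))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), ResidualBlock(base * 2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), ResidualBlock(base * 4))
        self.attn = ChannelAttention(base * 4)
        # decoder: upsample deep features and fuse them with shallower scales by concatenation
        self.dec2 = nn.Sequential(nn.Conv2d(base * 4 + base * 2, base * 2, 3, padding=1), ResidualBlock(base * 2))
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2 + base, base, 3, padding=1), ResidualBlock(base))
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        f1 = self.enc1(x)                                   # full resolution
        f2 = self.enc2(f1)                                  # 1/2 resolution
        f3 = self.attn(self.enc3(f2))                       # 1/4 resolution, attention-weighted
        u2 = F.interpolate(f3, scale_factor=2, mode="bilinear", align_corners=False)
        d2 = self.dec2(torch.cat([u2, f2], dim=1))          # multiscale fusion at 1/2 resolution
        u1 = F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([u1, f1], dim=1))          # multiscale fusion at full resolution
        return self.head(d1)                                # per-pixel class logits


if __name__ == "__main__":
    net = FusionSegNet(in_ch=3, num_classes=6)
    logits = net(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 6, 256, 256])
```

Concatenating the upsampled high-level features with the corresponding encoder scale is one common way to realize multiscale feature fusion; the attention gate on the deepest features reflects the abstract's use of weighted high-level features to retain edge and texture information.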