Affiliation:
1. Urban Resilience.AI Lab, Zachry Department of Civil and Environmental Engineering, Texas A&M University, Texas, USA
2. Department of Computer Science and Engineering, Texas A&M University, Texas, USA
Abstract
Street view imagery has emerged as a valuable resource for urban analytics research. Recent studies have explored its potential for estimating lowest floor elevation (LFE), which is crucial for assessing properties' flood risk and damage extent, offering a scalable alternative to traditional on‐site measurements. While existing methods rely on object detection, the introduction of image segmentation has expanded the utility of street view images for LFE estimation, although challenges remain in segmentation quality and in the ability to distinguish front doors from other doors. To address these challenges, this study integrates the Segment Anything Model, a segmentation foundation model, with vision language models (VLMs) to conduct text‐prompt image segmentation on street view images for LFE estimation. By evaluating various VLMs, integration methods, and text prompts, the most suitable model was identified for street view image analytics and LFE estimation, improving the coverage of the current segmentation‐based LFE estimation model from 33% to 56% of properties. Notably, the proposed method, ELEV‐VISION‐SAM, extends LFE estimation to almost all properties in which the front door is visible in the street view image. In addition, the findings present the first baseline and quantified comparison of vision models for street view image‐based LFE estimation. The model and findings not only advance street view image segmentation for urban analytics but also provide a novel approach to image segmentation for other civil engineering and infrastructure analytics tasks.
Funder
National Science Foundation
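
To make the pipeline described in the abstract concrete, below is a minimal sketch of VLM-guided, text-prompted segmentation, assuming Grounding DINO as the VLM and the Hugging Face transformers implementations of both models. The checkpoints ("IDEA-Research/grounding-dino-tiny", "facebook/sam-vit-base"), the prompt "a front door.", and the file name street_view.jpg are illustrative assumptions, not the configuration reported in the paper: the VLM grounds the text prompt to a bounding box, which then prompts the Segment Anything Model to produce the front-door mask whose bottom edge would feed the LFE computation.

# Hypothetical sketch of text-prompt segmentation for front doors in street view
# images. Checkpoints, prompt, and file name are illustrative assumptions only.
import torch
from PIL import Image
from transformers import (
    AutoProcessor,
    GroundingDinoForObjectDetection,
    SamModel,
    SamProcessor,
)

image = Image.open("street_view.jpg").convert("RGB")  # placeholder street view image

# Step 1: ground the text prompt to a bounding box with a vision language model.
dino_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
dino = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")
dino_inputs = dino_processor(images=image, text="a front door.", return_tensors="pt")
with torch.no_grad():
    dino_outputs = dino(**dino_inputs)
detections = dino_processor.post_process_grounded_object_detection(
    dino_outputs,
    dino_inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)[0]
# Take the highest-scoring detection as the front-door box [x0, y0, x1, y1].
best_box = detections["boxes"][detections["scores"].argmax()].tolist()

# Step 2: prompt the Segment Anything Model with the box to obtain a door mask.
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam = SamModel.from_pretrained("facebook/sam-vit-base")
sam_inputs = sam_processor(image, input_boxes=[[best_box]], return_tensors="pt")
with torch.no_grad():
    sam_outputs = sam(**sam_inputs)
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks,
    sam_inputs["original_sizes"],
    sam_inputs["reshaped_input_sizes"],
)
door_mask = masks[0][0, 0].numpy()  # boolean mask for the detected front door

# The mask's lowest foreground row marks the door-bottom pixel, the starting
# point for converting image coordinates into an elevation estimate.
door_bottom_row = door_mask.nonzero()[0].max()
print(f"Front-door bottom edge at image row {door_bottom_row}")

Box prompting is only one way to hand VLM output to SAM; point or mask prompts are plausible alternatives that an integration-method comparison such as the one the abstract describes could cover.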