Large-scale Flood Mapping based on Deep Learning, Remote Sensing, and Cloud Computing
Topics: Remote Sensing, Cyberinfrastructure, Hazards, Risks, and Disasters
Keywords: Disaster Mapping, Deep Learning, GeoAI, Cloud Computing, Microsoft Azure Machine Learning
Session Type: Virtual Paper Abstract
Day: Saturday
Session Start / End Time: 2/26/2022 11:20 AM (Eastern Time (US & Canada)) - 2/26/2022 12:40 PM (Eastern Time (US & Canada))
Room: Virtual 20
Authors:
Bo Peng, Department of Geography, University of Wisconsin-Madison
Qunying Huang, Department of Geography, University of Wisconsin-Madison
Abstract
With an increasing volume of remote sensing (RS) data, many artificial intelligence (AI) based methods have been developed for RS image recognition, primarily through data-driven supervised learning that discovers the underlying image patterns. However, such methods may not be applicable to real-time, large-scale flood mapping with RS imagery due to two major limitations: (1) standard datasets with human annotations of floodwaters on RS images are not available for training deep learning models, and (2) extensive computing resources are required for deep learning model training and large-scale data processing. In response, this paper develops the first very high resolution (VHR) satellite image dataset with fully labeled flood masks. With the support of the Microsoft Azure cloud computing service, this paper proposes a bi-temporal image segmentation model that incorporates pre- and post-disaster VHR satellite images for large-scale flood mapping. The model is trained and tested with the developed flood mask labels on Azure Machine Learning Compute. Experimental results demonstrate that the proposed model and the developed flood dataset enable large-scale flood mapping over unseen testing sites in near real time with the power of Azure cloud computing.
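To illustrate the bi-temporal idea described above, the following is a minimal sketch in PyTorch of one common design: pre- and post-disaster image tiles are fused along the channel axis ("early fusion") and passed through a small encoder-decoder that predicts a per-pixel flood mask. The architecture, layer sizes, and class name here are illustrative assumptions, not the authors' actual model.

```python
# Hedged sketch of a bi-temporal segmentation network (assumed design,
# not the paper's implementation): early fusion of pre/post-event tiles.
import torch
import torch.nn as nn

class BiTemporalSegNet(nn.Module):
    def __init__(self, bands=3):
        super().__init__()
        # Encoder sees both dates: 2 * bands input channels after fusion.
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Decoder upsamples back to input resolution; 1-channel flood logit.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, pre, post):
        # Early fusion: stack the two acquisition dates channel-wise.
        x = torch.cat([pre, post], dim=1)
        return self.decoder(self.encoder(x))

model = BiTemporalSegNet(bands=3)
pre = torch.randn(1, 3, 64, 64)   # pre-disaster tile (batch, bands, H, W)
post = torch.randn(1, 3, 64, 64)  # post-disaster tile of the same scene
mask_logits = model(pre, post)    # per-pixel flood logits, same H x W
print(mask_logits.shape)
```

A sigmoid over the output logits would yield per-pixel flood probabilities; later-fusion variants instead encode each date separately before combining features.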