Automating the detection and classification of methane pollution: integrating deep learning and techno-economic analysis


Abstract/Contents

Abstract
Mitigating methane leakage from the oil and gas (O&G) system has become an increasingly important part of addressing climate change. Effective methane leak detection, quantification, and classification can make the mitigation process more efficient and cost effective, because reductions in methane emissions yield large climate benefits owing to methane's high global warming potential (GWP). Optical gas imaging (OGI), a method frequently used to detect methane leaks in leak detection and repair (LDAR) programs, is labor-intensive and cannot automatically inform operators of detection results or leak sizes. Few studies have examined the possibility of automatic leak detection and classification using videos taken by an infrared (IR) camera, an OGI device. In this study, we investigate the application of deep-learning-based computer vision techniques to two challenges faced by methane leak mitigation: leak/non-leak binary classification and leak volume classification. Additionally, we perform a techno-economic analysis of the cost effectiveness of computer-vision-enabled automated natural gas leakage detection technologies.

First, we collected the first large-scale methane leak video dataset, GasVid, which contains approximately 1 million frames from labeled videos of methane leaks from different leaking equipment, covering a wide range of leak sizes (5.3-2051.6 g CH4/h) and imaging distances (4.6-15.6 m). GasVid covers a wide range of realistic leak scenarios that might be encountered at operating facilities.

Second, for the problem of leak detection using images, we develop a computer vision approach called GasNet, an OGI-based leak detection method that uses convolutional neural networks (CNNs) trained on methane leak images extracted from GasVid to enable automatic detection. We examine different background subtraction methods to isolate the methane plume in the foreground from noisy or variable backgrounds. We assess the ability of GasNet to perform leak detection by comparing it to a baseline method that uses an optical-flow-based change detection algorithm. We explore the sensitivity of results to the CNN structure and find that a moderate-complexity variant gives the best performance. The generated probability of detection (PoD) curves show that the detection accuracy (fraction of leak and non-leak images correctly identified by the algorithm) can reach as high as 99% in some cases, and the overall detection accuracy can exceed 95% across all leak sizes and imaging distances. Binary detection accuracy exceeds 97% for large leaks (around 710 g CH4/h) imaged at close range (around 5-7 m).

Third, for the problem of leak size classification based on video streams (series of contiguous frames), we study three deep learning architectures designed to analyze video: a 2D CNN applied iteratively to each frame, a 3D CNN applied across all frames at once, and a convolutional long short-term memory (ConvLSTM) network. We find that the 3D CNN is the best-performing and most robust architecture, and we name the resulting network VideoGasNet. The leak/non-leak detection accuracy of VideoGasNet can reach 100%, and its three-class (small-medium-large) classification accuracy is 78.2%. VideoGasNet can be considered a more advanced version of GasNet because it captures not only spatial features but also temporal information across frames. It greatly extends the capabilities of IR-camera-based leak monitoring systems from leak detection alone to automated leak classification with high accuracy and fast processing speed.
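To make the video-classification idea above concrete, the following is a minimal, hypothetical PyTorch sketch of a 3D CNN that takes a short stack of background-subtracted IR frames and predicts a leak-size class. The framework choice, layer sizes, clip length, and four-class labeling (non-leak/small/medium/large) are illustrative assumptions and do not reproduce the actual VideoGasNet architecture.

# Illustrative sketch only: a toy 3D CNN video classifier in the spirit of VideoGasNet.
# All architecture details here are assumptions for demonstration.
import torch
import torch.nn as nn

class Toy3DCNN(nn.Module):
    def __init__(self, num_classes: int = 4):  # non-leak / small / medium / large (assumed labeling)
        super().__init__()
        self.features = nn.Sequential(
            # Conv3d kernels span (time, height, width), so the filters can learn
            # plume motion across frames as well as spatial texture within frames.
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        # x: (batch, 1, frames, H, W) single-channel, background-subtracted IR clips
        return self.classifier(self.features(x).flatten(1))

# Usage example: a batch of two 16-frame, 96x96 clips of random data.
clips = torch.randn(2, 1, 16, 96, 96)
logits = Toy3DCNN()(clips)      # shape (2, 4)
print(logits.argmax(dim=1))     # predicted class index per clip

The key design point the sketch illustrates is that 3D convolutions treat the frame stack as a single volume, so temporal plume dynamics are learned jointly with spatial features rather than frame by frame.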
Fourth, we assess the cost effectiveness of computer-vision-enabled automated OGI (AOGI) developed using the VideoGasNet platform. Classical LDAR requires repair of all leaks found during the LDAR survey; we also examine a new leak detection, classification, and repair (LDCAR) program, which categorizes leaks by size (small, medium, or large) and repairs a subset of the largest leaks. In theory, LDCAR repairs only the large leaks that contribute most of the emissions, potentially leading to greater mitigation cost effectiveness. We use empirical data and the Fugitive Emissions Abatement Simulation Toolkit (FEAST) to perform a techno-economic analysis of AOGI and traditional OGI applied in both LDAR and LDCAR. Even though AOGI has lower detection sensitivity than a human surveyor and faces limits in the field of view (FoV) obtainable for leak detection, AOGI could achieve up to a 97.4% reduction in the net private cost of mitigation (NetCOM) compared with OGI in LDAR. This is achieved by reducing the detection cost by 89.2% and increasing the survey speed by 400%. Because of AOGI's lower sensitivity, achieving the same mitigation goal as OGI requires increasing the AOGI survey frequency from 2 to 2.8 times per year, a 40% increase in visits. However, even with a survey frequency of 2.8 times per year, the NetCOM of AOGI is still only 9.8% that of OGI. The NetCOM of LDCAR, which is positively related to the ratio of the total cost of mitigation to the volume of saved gas, may not always be the lowest. The cost effectiveness of LDAR and LDCAR is strongly affected by repair cost and survey speed. In addition, by considering the social impacts of methane emissions and incorporating the concept of the social cost of methane (SCM), we introduce a new evaluation metric, SocialNetCOM, in which the quantified social benefits of methane mitigation are included (see the sketch following this paragraph). SocialNetCOM is smaller than NetCOM and can even take negative values, indicating that the benefits of mitigation programs can exceed the associated costs once the SCM is taken into account. We also investigate the influence of imperfect detection and classification accuracy on the full cost-benefit analysis by examining the effects of false positive and false negative errors.
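A rough sketch of how the two metrics relate (the exact definitions are developed in Chapter 3; the functional forms below are illustrative assumptions consistent with the description above, where C_survey and C_repair denote survey and repair costs, p_gas the natural gas price, Q_saved the volume of gas saved by the program, and B_social the monetized social benefit of the avoided methane valued at the SCM):

\[
\mathrm{NetCOM} \approx \frac{C_{\mathrm{survey}} + C_{\mathrm{repair}} - p_{\mathrm{gas}}\,Q_{\mathrm{saved}}}{Q_{\mathrm{saved}}}, \qquad
\mathrm{SocialNetCOM} \approx \mathrm{NetCOM} - \frac{B_{\mathrm{social}}}{Q_{\mathrm{saved}}}
\]

Under a form like this, a sufficiently large B_social drives SocialNetCOM negative, matching the observation that mitigation benefits can exceed costs once the SCM is included.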
In summary, our computer-vision-based AOGI system comprehensively leverages the advantages of deep learning and computer vision and brings multiple benefits, including reduced labor cost, faster survey rates, prioritization of large leaks, maximized mitigation efficiency, and minimized net cost of mitigation. It has the potential to be used in OGI surveys for methane leak detection with high cost effectiveness in the real world. The core of this dissertation consists of my three primary research papers: (1) Chapter 1 is my first paper, Machine vision for natural gas methane emissions detection using an IR camera. (2) Chapter 2 is my second paper, VideoGasNet: deep learning for natural gas methane leak classification using an IR camera. (3) Chapter 3 is my third paper, Techno-economic analysis of computer-vision-enabled automated natural gas leakage detection. Note that the literature review is included with each chapter as appropriate to the material in that chapter, rather than being collected into a single literature review chapter. In addition, we describe the datasets collected during my PhD in Chapter 4, and we summarize the conclusions and future work in Chapter 5.

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2019
Publication date 2019
Issuance monographic
Language English

Creators/Contributors

Author Wang, Jingfan, active 2019-
Degree supervisor Brandt, Adam (Adam R.)
Thesis advisor Brandt, Adam (Adam R.)
Thesis advisor Azevedo, Inês M. L.
Thesis advisor Sweeney, James L.
Degree committee member Azevedo, Inês M. L.
Degree committee member Sweeney, James L.
Associated with Stanford University, Department of Energy Resources Engineering.

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Jingfan Wang.
Note Submitted to the Department of Energy Resources Engineering.
Thesis Thesis (Ph.D.)--Stanford University, 2019.
Location electronic resource

Access conditions

Copyright
© 2019 by Jingfan Wang
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
