Identifying Digital Analogs and Extracting Petrophysical Properties from Images using Vision Transformer Architecture
Abstract:
The use of machine learning continues to grow in the oil and gas industry with the increased availability of large, diverse, and real-time datasets. Actionable insights derived from this data drive many of our business decisions. These applications span several orders of magnitude in time- and length-scales, from the nanometer to the kilometer and from decades-old data streams to real-time decision making.
In this talk, I hope to demystify machine learning and showcase how it can facilitate reservoir characterization tasks that were unheard of just a few years ago. A distinguishing aspect of this talk is that machine learning is not limited to routine core-, log-, or seismic-derived analyses. Machine learning algorithms have given us the capability to harness the information content of images to extract petrophysical and mechanical properties as well as to identify geologic analogs.

Figure: A grayscale SEM image (left) and the three corresponding attention maps generated by a vision transformer operating in self-supervised mode. The attention maps support both classification and segmentation tasks for petrophysical property estimation.
I begin with the classical convolutional neural network (CNN) approaches to classification and segmentation. CNNs are the dominant approach for such applications and are well suited to these tasks because of the inductive biases built into their architecture, chiefly locality and translation equivariance. However, a nascent but growing body of work showcases the promise of vision transformer-based algorithms for classification and segmentation tasks. Vision transformers (ViTs) have shown the ability to learn contextual information spanning both long- and short-range dependencies. Remarkably, this allows ViTs to identify key image features through attention maps without prior training on labeled datasets. This talk underscores the ability of vision transformers embedded in a student-teacher framework to learn underlying features and enable rapid, self-supervised classification and extraction of petrophysical properties.
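As a concrete illustration of the attention maps referenced above, the sketch below extracts per-head self-attention from a ViT pretrained with DINO, a student-teacher self-supervised framework of the kind described in the talk. This is not the speaker's code: the model choice (dino_vits8), the 480x480 input size, and the file sem_sample.png are illustrative assumptions, and the get_last_selfattention helper comes from the public facebookresearch/dino repository.

    import torch
    import torchvision.transforms as T
    from PIL import Image

    # Load a ViT-Small (8x8 patches) pretrained with DINO self-supervision.
    model = torch.hub.load('facebookresearch/dino:main', 'dino_vits8')
    model.eval()

    # Grayscale SEM image -> 3-channel tensor with ImageNet normalization.
    preprocess = T.Compose([
        T.Resize((480, 480)),
        T.Grayscale(num_output_channels=3),
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    img = preprocess(Image.open('sem_sample.png')).unsqueeze(0)  # hypothetical filename

    with torch.no_grad():
        # Attention of the [CLS] token over all image patches, one map per head
        # (get_last_selfattention is provided by the DINO repository's ViT).
        attn = model.get_last_selfattention(img)          # (1, heads, tokens, tokens)
        cls_attn = attn[0, :, 0, 1:]                      # drop the [CLS]-to-[CLS] entry
        n_heads = cls_attn.shape[0]
        side = int(cls_attn.shape[-1] ** 0.5)             # patches per image side (60 here)
        attn_maps = cls_attn.reshape(n_heads, side, side) # one 2-D attention map per head

Each resulting 2-D map can be upsampled to the original image resolution and overlaid on the SEM image, which is how attention maps such as those in the figure above are typically visualized.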

Speaker's Bio:
Deepak Devegowda is Professor and Mewbourne Chair of Petroleum Engineering at the University of Oklahoma. He obtained his MS and PhD degrees in Petroleum Engineering from Texas A&M University. His research interests include fundamental atomistic/molecular modeling to understand the processes behind carbon sequestration, hydrogen storage, and unconventional oil and gas recovery. He also works extensively on subsurface applications of machine learning.