The challenge of reservoir modeling
Reservoir modeling should be the ultimate tool in the asset team’s toolbox for looking into the future and predicting the likelihood of outcomes; a tool to help ensure better decisions. But the evidence is clear: despite all the improvements that let us build bigger and faster reservoir simulation models, the resulting forecasts are disappointingly wrong.¹ Many technology providers have focused on solving the forward model faster: faster simulators, faster geostatistics. Arguably, this just gets us to wrong predictions more quickly.
The industry must abandon the notion that we can ever create the “perfect” digital twin of the reservoir – the so-called “case-centric” approach – which is the real root cause of underperforming assets, unexpected results from infill drilling programs, and the general inefficiency of the development and production process. Instead, we should treat reservoir modeling as the inverse problem it truly is.
1. Bratvold, R. B., Mohus, Petutschnig, D., & Bickel, E. (2019, September 23). Production Forecasting: Optimistic and Overconfident; Over and Over Again. Society of Petroleum Engineers. doi:10.2118/195914-MS
Ensemble-based modeling
Fortunately, innovators suggested a framework and a set of tools many years ago that let us embrace the fact that, at any given time, our current perception of the reservoir is uncertain and will change as we gather new data and continuously learn – the so-called “uncertainty-centric” approach. In this approach, we try to determine how likely it is that the reservoir parameters (porosity, permeability, etc.) take particular values, using the observed data as the constraint. This is what solving an inverse problem is all about.
Ensemble modeling uses dedicated algorithms to generate a large number of models that respect all of the existing static data as well as the dynamic history-matching information, and the process avoids user bias. Through rapid iterations and the immediate inclusion of any new data, the spread of possible models is narrowed, but no model that continues to fit the data is ever rejected.
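To make the inverse-problem idea concrete, here is a minimal, purely illustrative Python sketch (not ResX code): a prior ensemble of porosity values is pushed through a toy forward model and re-weighted by how well each realization explains a single observed rate. The forward model, the numbers, and the single-parameter setup are invented for illustration; a real workflow involves full simulation models and many data types.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble: 500 equally plausible porosity values drawn from our
# current (uncertain) perception of the reservoir.
prior_porosity = rng.normal(loc=0.20, scale=0.05, size=500)

# Toy forward model: predicted initial rate as a function of porosity
# (purely illustrative, not a real inflow relationship).
def forward_model(porosity):
    return 4000.0 * porosity

# A single observed rate and its measurement uncertainty (made up).
d_obs, obs_std = 900.0, 50.0

# Bayesian weighting: realizations whose predictions sit far from the
# observation receive low likelihood, so the constrained (posterior)
# ensemble narrows around porosity values consistent with the data.
misfit = forward_model(prior_porosity) - d_obs
weights = np.exp(-0.5 * (misfit / obs_std) ** 2)
weights /= weights.sum()

posterior_porosity = rng.choice(prior_porosity, size=500, p=weights)
print(prior_porosity.std(), posterior_porosity.std())  # spread narrows
```

The final print shows that the posterior spread is narrower than the prior: the ensemble has been constrained by the data without ever being collapsed to a single “best” model.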
The Resoptima solution
Resoptima was founded in 2010 to bring the industry’s first ensemble-based modeling solution to market. ResX was launched in 2013 and has grown steadily in scope and versatility, having been applied to over 130 E&P assets around the globe to date. The workflows built into the application guide reservoir engineers and geologists through the steps from creating the initial ensemble to analyzing the result. The IRMA (Integrated Reservoir Management & Analysis) application, itself designed from the ground up to handle ensemble data, provides insights that guide decisions on anything from infill drilling to EOR projects.
Data assimilation done right
Building a reservoir model in 3D means using the information in our collected data to assess millions of unknown model components, including 2D surfaces (structural horizons, …), 3D grid properties (litho-facies, …) with their associated petrophysical properties, and scalar variables (fluid contacts, the shape of the relative permeability curves, …). All of this must be done while capturing the uncertainty in both the model assumptions and the collected data.
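As a hedged sketch of how such heterogeneous unknowns can be handled together – with hypothetical names and shapes, not the actual ResX data model – the following Python fragment gathers surfaces, grid properties, and scalars for one realization and stacks them into a single state vector, the object a data assimilation algorithm actually updates.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ReservoirRealization:
    """One ensemble member; all field names and shapes are hypothetical."""
    horizon_depths: np.ndarray   # 2D surface, e.g. (ny, nx) per horizon
    facies: np.ndarray           # 3D grid property, e.g. (nz, ny, nx)
    porosity: np.ndarray         # 3D petrophysical property
    owc_depth: float             # scalar: oil-water contact depth
    corey_exponent: float        # scalar shaping the rel-perm curves

    def to_state_vector(self) -> np.ndarray:
        """Stack every unknown into one long vector; with realistic grid
        sizes this easily reaches millions of model components."""
        return np.concatenate([
            self.horizon_depths.ravel(),
            self.facies.ravel(),
            self.porosity.ravel(),
            [self.owc_depth, self.corey_exponent],
        ])
```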
This makes the family of ensemble Kalman-based algorithms, introduced by Geir Evensen in 1994, particularly well suited for data assimilation in reservoir applications: they scale linearly with the number of model components and allow us to incorporate and propagate uncertainty through every part of the modeling and data assimilation process. However, despite their attractive speed, many unresolved issues make ensemble Kalman-based algorithms difficult to apply to reservoirs, as the many initial disappointments of the early 2000s – when these algorithms were first introduced to oil and gas – made clear.
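The sketch below shows the textbook stochastic ensemble Kalman analysis step in the spirit of Evensen’s formulation; it is a generic reference implementation, not Resoptima’s, but it illustrates why the cost grows only linearly with the number of model components: the update is built from ensemble cross-covariances of size (components × observations) rather than a full model covariance matrix.

```python
import numpy as np

def enkf_update(X, D, d_obs, obs_std, rng):
    """Stochastic ensemble Kalman analysis step (textbook form, not ResX).

    X     : (n_components, n_ens) ensemble of model state vectors
    D     : (n_obs, n_ens) simulated observations for each member
    d_obs : (n_obs,) measured data; obs_std is the measurement std. dev.
    """
    n_ens = X.shape[1]
    n_obs = len(d_obs)

    # Deviations of each member from the ensemble mean.
    A = X - X.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)

    # Ensemble (cross-)covariances; cost is linear in n_components.
    C_xy = A @ Y.T / (n_ens - 1)
    C_yy = Y @ Y.T / (n_ens - 1) + obs_std**2 * np.eye(n_obs)

    # Kalman gain and perturbed observations (so the updated ensemble
    # retains a statistically valid spread).
    K = C_xy @ np.linalg.inv(C_yy)
    D_perturbed = d_obs[:, None] + rng.normal(0.0, obs_std, size=D.shape)

    return X + K @ (D_perturbed - D)
```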
Resoptima ultimately settled on the iterative ensemble Kalman smoother, which delivers impressive performance and results for resolving subsurface uncertainty.
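ResX’s exact algorithm is not reproduced here, but the essence of the iterative idea can be sketched with one well-known member of the iterative ensemble smoother family, ES-MDA (ensemble smoother with multiple data assimilation, after Emerick & Reynolds): the same Kalman-type update is applied several times with inflated observation errors, re-running the forward model between updates. The sketch reuses the enkf_update function above; the forward model is a placeholder.

```python
import numpy as np

def es_mda(prior_X, forward_model, d_obs, obs_std, n_iter=4, seed=0):
    """ES-MDA sketch: repeat the Kalman-type update n_iter times with the
    observation error inflated by a factor n_iter, re-simulating between
    updates. `forward_model` maps an (n_components, n_ens) ensemble to
    simulated observations of shape (n_obs, n_ens); it is a placeholder.
    """
    rng = np.random.default_rng(seed)
    X = prior_X.copy()
    inflated_std = np.sqrt(n_iter) * obs_std   # error covariance scaled by n_iter
    for _ in range(n_iter):
        D = forward_model(X)                              # simulate all members
        X = enkf_update(X, D, d_obs, inflated_std, rng)   # damped update
    return X
```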
The key factor for success in any data assimilation algorithm is preserving consistency between the static input data (such as seismic, well logs, geological concepts, …) and the dynamic data (such as production, 4D seismic, …). This is achieved by performing model updates on the true model components rather than on the classical derived simulation-model input parameters. In addition, the combined insights from more than 20 years of research put ResX in a position to exploit the key strengths of the ensemble Kalman-based algorithms while avoiding their weaknesses. Examples include the use of iterative ensemble smoothers and adaptive pluri-Gaussian facies modeling techniques as key ingredients of our software solution.
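Pluri-Gaussian facies modeling, mentioned above, assigns a facies to each grid cell by truncating two (or more) Gaussian fields against a rule; because a Kalman-type update can then act on the continuous Gaussian fields (true model components) rather than on discrete facies codes, the geological concept is preserved. The sketch below is a deliberately simplified, non-adaptive version with made-up thresholds, facies names, and spatially uncorrelated fields.

```python
import numpy as np

def plurigaussian_facies(z1, z2, t1=0.0, t2=0.3):
    """Toy (non-adaptive) pluri-Gaussian truncation rule.

    z1, z2 : two Gaussian random fields on the same grid
    t1, t2 : truncation thresholds (hypothetical values)
    Returns an integer facies code per cell.
    """
    facies = np.zeros(z1.shape, dtype=int)        # facies 0: background shale
    facies[z1 > t1] = 1                           # facies 1: channel sand
    facies[(z1 > t1) & (z2 > t2)] = 2             # facies 2: cemented sand
    return facies

# Example on white-noise fields; a real workflow would use spatially
# correlated Gaussian simulations honoring variograms and well data.
rng = np.random.default_rng(1)
z1, z2 = rng.standard_normal((2, 50, 50))
print(np.bincount(plurigaussian_facies(z1, z2).ravel()))
```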
Ensemble methods in oil and gas have evolved into more than just another history-matching algorithm. Today, ResX provides automated and updatable workflows for reservoir modeling that draw on a large variety of geostatistical methods, sparse regression, metric learning, and stochastic programming – just some of the ingredients behind its state-of-the-art ensemble management capabilities for modeling and data assimilation.
For a more detailed account of the adoption of ensemble Kalman methods in our industry, please read “Ensemble Kalman algorithm hype cycle – a retrospective”, written by Resoptima CSO Jon Sætrom in 2018; it can be found here.