Computer models of oil and gas reservoirs are used to predict future production and to locate pockets of remaining oil. The focus of this project is to improve the reliability of these predictions by incorporating as much information as possible from seismic surveys and well production data. The result will be improved reservoir management and field development, with a subsequent reduction in CO2 footprint.
One key to improving the ability to forecast future events is to ensure that the models are consistent with historically observed behaviour. It is generally necessary to adjust parameters in a computer model so that the predicted seismic and production data agree with the data that were actually observed. This calibration is challenging because repeated seismic surveys can provide exceptionally large amounts of data, which makes calibration of a large reservoir flow model very difficult. In this project, we have developed methods for calibrating large reservoir models to seismic and production data such that errors in the forecasts are reduced and the uncertainty is properly quantified.

In our data-assimilation problems, both production and seismic data are subject to measurement errors, and neither the reservoir flow model nor the seismic model is perfect. This also makes calibration difficult, because overfitting noisy or biased data with an incorrect model can result in biased predictions that are overconfident. We have developed methods for assessing the quality of calibrated reservoir models and forecasts on large problems. We have also evaluated the value of different types of seismic data, and we have developed methods for identifying the sources of imperfections in the reservoir model: missing parameters, missing processes, and parameters for which the prior uncertainty is too small. To support these methods, we have developed novel visualization techniques that couple with existing 3D visualization software. Finally, we have developed recommendations for a standardized workflow for 4D seismic history matching that reduces model error and improves forecast ability. In some cases, fundamental properties, such as the conceptual model of the reservoir or the data noise, are themselves uncertain.
For such cases, we have developed methods that include this uncertainty in the data-assimilation problem, giving more reliable forecasts of future reservoir behaviour.
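The calibration described above can be illustrated with a minimal ensemble-smoother update, in which an ensemble of uncertain model parameters is shifted toward agreement with noisy observations. This is only a sketch: the two-parameter linear "forward model" and all numerical values below are assumptions standing in for the real flow and seismic simulators, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reservoir model": predicted data is a linear function of parameters.
# In a real workflow this would be a flow + seismic forward simulation.
def forward(m):
    G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
    return G @ m

Ne = 200                                       # ensemble size
m_prior = rng.normal(0.0, 1.0, size=(2, Ne))   # prior parameter ensemble
d_obs = np.array([1.0, 0.5, 0.8])              # observed data (assumed values)
sd_obs = 0.1                                   # observation-error std. dev.

# Predicted data for each ensemble member
D = np.stack([forward(m_prior[:, j]) for j in range(Ne)], axis=1)

# Sample cross-covariance (parameters vs. predictions) and prediction covariance
dm = m_prior - m_prior.mean(axis=1, keepdims=True)
dd = D - D.mean(axis=1, keepdims=True)
C_md = dm @ dd.T / (Ne - 1)
C_dd = dd @ dd.T / (Ne - 1)
C_e = sd_obs**2 * np.eye(len(d_obs))

# Kalman-type gain; update each member with perturbed observations
K = C_md @ np.linalg.inv(C_dd + C_e)
d_pert = d_obs[:, None] + rng.normal(0.0, sd_obs, size=(len(d_obs), Ne))
m_post = m_prior + K @ (d_pert - D)

# Average data misfit over the ensemble, before and after calibration
misfit = lambda M: np.mean(np.linalg.norm(
    np.stack([forward(M[:, j]) for j in range(Ne)], axis=1) - d_obs[:, None],
    axis=0))
print(misfit(m_post) < misfit(m_prior))  # True: calibrated ensemble fits the data better
```

The update keeps an ensemble of models rather than a single best fit, which is what allows the remaining uncertainty in the forecasts to be quantified.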
Methods developed in this project have been tested on real production and seismic data from the Edvard Grieg, Norne, and Volve fields. In collaboration with the operator, the standardized workflow for 4D seismic history matching was successfully applied to the Edvard Grieg field. To ensure sufficient industry relevance, project members have arranged regular one-to-one meetings with the industry partners. To increase competence in ensemble-based methods within the companies, the project has also given courses for many of the industry partners. In addition to publishing the results in scientific journals, we have made the data-assimilation software publicly available as a code repository on GitHub. Based on the results of the project, at least one of the project partners is currently implementing in-house software for ensemble-based data assimilation. Two partners, who already have in-house software, have expanded its functionality by incorporating results from the project. Because the project results are published, we expect that commercial software providers will re-implement some of them in their commercial tools, further benefiting trade and industry.
The immediate result of this project is better methods for utilizing data from 4D seismic surveys for improved reservoir characterization, including improved methods for seismic modelling, improved calibration methods, and new diagnostic tools. The comprehensive review paper on 4D seismic history matching provides a well-founded overview of the current challenges in the field. Part of the project was to educate two PhD students, which directly leads to more expertise on the topic. The project has proposed recommendations for a standardized workflow for 4D seismic history matching and demonstrated this workflow on the Edvard Grieg field.
The longer-term result will be improved management of complex fields with 4D seismic data, higher ultimate recovery, and reduced emissions. We believe that the project results have contributed to improved methods for subsurface characterization. In addition to more efficient oil and gas production, this is also beneficial for new energy solutions in the subsurface, such as H2 and CO2 storage. Hence, we anticipate that the project results will bring long-term changes as different commercial users apply them to energy solutions in the subsurface.
While the acquisition of time-lapse seismic data has become common in the Norwegian sector of the North Sea, quantitative use of the data to improve forecasts from reservoir models is still not standard, so the information in the seismic data is not optimally utilized for field development and reservoir management. Challenges to the assimilation of seismic data include potentially large amounts of data, significant errors in the modelling of seismic attributes, and strong nonlinearity in the relationship between model parameters and data. The processes occurring in each field are different; hence, there is a need in each case to identify, simulate, and extract the most informative 4D attributes from the data. Ensemble-based methods have been shown to be highly effective at assimilating large amounts of data into complex models and have shown promise for the assimilation of 4D seismic data, but they have limitations for highly nonlinear problems and can underestimate uncertainty. During the last five years we have developed and demonstrated methodologies for 4D seismic history matching on real fields. In this project, we will address new and important research questions identified in collaboration with industry experts: updating of facies or rock types in complex geology, dealing with model deficiency, methods for model improvement when a model is judged deficient, and validation of forecast reliability.
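One common way to mitigate the nonlinearity limitation of a single ensemble update is ES-MDA (ensemble smoother with multiple data assimilations), which applies several smaller updates with inflated observation errors. The sketch below is a toy illustration under stated assumptions: the weakly nonlinear two-parameter forward model and all numbers are hypothetical stand-ins for real flow and seismic simulators, not a method specific to this project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weakly nonlinear forward model (an assumption for
# illustration only), standing in for the flow + seismic simulators.
def forward(m):
    return np.array([m[0] + 0.3 * m[1]**2, m[1] + 0.1 * m[0]**2])

m_true = np.array([1.0, -0.5])          # synthetic "truth" used to make data
sd = 0.05                               # observation-error std. dev.
d_obs = forward(m_true) + rng.normal(0.0, sd, 2)

Ne, Na = 100, 4                         # ensemble size, number of MDA steps
alphas = [Na] * Na                      # inflation factors; sum(1/a) must be 1
M = rng.normal(0.0, 1.0, (2, Ne))       # prior parameter ensemble

for a in alphas:
    # Predicted data for each member, then sample covariances
    D = np.stack([forward(M[:, j]) for j in range(Ne)], axis=1)
    dm = M - M.mean(axis=1, keepdims=True)
    dd = D - D.mean(axis=1, keepdims=True)
    C_md = dm @ dd.T / (Ne - 1)
    C_dd = dd @ dd.T / (Ne - 1)
    C_e = a * sd**2 * np.eye(2)         # inflated observation-error covariance
    # Kalman-type gain and update with correspondingly inflated perturbations
    K = C_md @ np.linalg.inv(C_dd + C_e)
    d_pert = d_obs[:, None] + rng.normal(0.0, np.sqrt(a) * sd, (2, Ne))
    M = M + K @ (d_pert - D)

print(M.mean(axis=1))  # should end up close to m_true = [1.0, -0.5]
```

Splitting the assimilation into several damped steps re-linearizes the forward model around progressively better estimates, which is why the scheme copes better with nonlinear data relationships than a single large update.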
By focusing early in the project on the application to real field examples, we will ensure that the research effort is directed at the most important challenges. Also, by evaluating forecast reliability in a probabilistic framework, we will ensure that the methods produce models that are most useful for reservoir management and field development.