Novel Stuck Pipe Troubles Prediction Model Using Reinforcement Learning
Published: 2022-02-21
Container-title: Day 2 Tue, February 22, 2022
Authors: Majed Alzahrani, Bader Alotaibi, Beshir Aman
Abstract
Predicting stuck pipe problems during oil and gas drilling operations is one of the most complex problems in the drilling business. The complexity is driven not only by natural factors but also by the nature of the drilling operation itself. The drilling operation is continuously influenced by a dynamic, smart system. The dynamic part of the system is shaped by natural forces, such as formation-related characteristics, and by human activities during the operation, such as drilling, tripping, and hole cleaning. The smartness of the system stems from the fact that the operation is controlled by a number of experts, i.e., drilling engineers, who try to run the best sequence of operations with the best operating parameters to achieve the operation's objectives. On top of that, the engineers can change their operating plan whenever they find it necessary to address any operational condition, including a potential stuck pipe problem.
In this paper we show that stuck pipe prediction is not a binary classification problem. Instead, we define it as a multi-class problem that takes the dynamic nature of the drilling operation into consideration. A reinforcement learning based algorithm is proposed to solve the redefined problem, and its performance and evaluation results are shared in detail. The accuracy of the developed algorithm in detecting true stuck pipe events is shown. We compare the performance of different machine learning algorithms and use the comparison to justify the selection of the best-performing method. In addition, we show how accuracy improves over time by employing a feedback channel to retrain the model. The presented method uses reinforcement logic, in which the solution is connected to operation reporting so that each prediction is labeled as true or false. This information is then used to retrain the neural networks on new operational patterns and enhance accuracy.
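The feedback loop described above — predict a class, label the prediction from the operation report, and retrain on the labeled outcome — can be sketched minimally. The class names, feature layout, and update rule below are illustrative assumptions, not the authors' taxonomy or model; a simple multi-class perceptron stands in for the paper's neural networks.

```python
# Minimal sketch of the prediction + report-feedback retraining loop.
# CLASSES and PROTOTYPES are hypothetical stand-ins, not the paper's labels.
CLASSES = ["normal", "early_warning", "imminent_stuck", "stuck"]

class FeedbackClassifier:
    """Multi-class linear scorer updated from operational feedback."""

    def __init__(self, n_features, classes):
        self.classes = classes
        # One weight vector per class, all zeros before any feedback.
        self.w = {c: [0.0] * n_features for c in classes}

    def predict(self, x):
        # Score every class; report the highest-scoring one.
        scores = {c: sum(wi * xi for wi, xi in zip(self.w[c], x))
                  for c in self.classes}
        return max(scores, key=scores.get)

    def feedback(self, x, predicted, actual, lr=0.1):
        # Reinforcement-style update from the operation report:
        # reward the true class, penalize the wrong prediction.
        if predicted != actual:
            self.w[actual] = [wi + lr * xi
                              for wi, xi in zip(self.w[actual], x)]
            self.w[predicted] = [wi - lr * xi
                                 for wi, xi in zip(self.w[predicted], x)]

# Synthetic one-hot "drilling state" features, one prototype per class.
PROTOTYPES = {c: [1.0 if i == j else 0.0 for i in range(len(CLASSES))]
              for j, c in enumerate(CLASSES)}

clf = FeedbackClassifier(n_features=len(CLASSES), classes=CLASSES)

# One operational pass: predict, compare with the report label, feed back.
for c, x in PROTOTYPES.items():
    clf.feedback(x, clf.predict(x), c)

# After retraining from feedback, the prototype states classify correctly.
assert all(clf.predict(x) == c for c, x in PROTOTYPES.items())
```

The single pass here plays the role of the paper's retraining channel: mislabeled predictions, once flagged against the operation report, directly adjust the model so the same pattern is classified correctly next time.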
Cited by: 2 articles.