
Production Forecasting Using Transfer Learning of Pretrained Deep Model


Investigator: Zainab Al-Ali 

Fluid rate measurement and forecasting are crucial for field development. The goal of this project is to develop a virtual flow meter that forecasts the production rate of wells in the Norwegian Volve field using deep learning models. This project introduces a novel application of transfer learning with a pretrained Neural Basis Expansion Analysis for Interpretable Time Series Forecasting (N-BEATS) model to production forecasting. Prior work was limited to feature-based linear regression algorithms and traditional sequence deep learning models, mainly RNNs, LSTMs, and the Long- and Short-term Time-series Network (LSTNet), applied to predict the pressure response of a single well with a single fluid. In this project, we extended the application of deep learning research by introducing two methods: an attention-based model using the Temporal Fusion Transformer (TFT) and a transfer learning approach using N-BEATS pretrained on the M4 series. Both the TFT and the pretrained N-BEATS models outperformed the traditional LSTM model, improving the test score (MAE) by 0.04 and 0.08, respectively, with N-BEATS demonstrating excellent matching results. We concluded that transfer learning with a pretrained N-BEATS model eliminates the previous disadvantages of LSTM models, which require multivariate features and a large training history. This research shows promising results for using transfer learning and the N-BEATS model, especially for new or green fields with limited historical data.

Methods
In this work, we relied mainly on deep learning models designed to handle the sequence dependency of time-series data. In our first application, we trained an LSTM model on the dataset to predict well oil rates. One of the main challenges of this project is the field's limited history, which results in a small dataset with few features. The target oil rate data were also complex, with no clear trend or seasonality. Fitting the LSTM model was therefore challenging, requiring many iterations and extensive hyperparameter tuning, and even then the model may not generalize well to the test dataset. The second model we used was the Temporal Fusion Transformer (TFT). The TFT model integrates the mechanisms used in LSTM layers, the attention heads used in transformers, and the Gated Residual Network (GRN) to learn relationships along the time axis. The TFT model outperformed the LSTM model on our dataset; however, the prediction did not reach the desired accuracy. To overcome these challenges, we adopted a transfer (meta) learning approach for time-series prediction. We used a recently developed deep learning model, N-BEATS (Neural Basis Expansion Analysis for Interpretable Time Series Forecasting), pretrained it on the large M4 time-series dataset, and used it to predict the target oil rate in our testing set. A minimal sketch of the LSTM baseline is given below.
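The figure captions in this report refer to BlockRNN, TFT, and N-BEATS, which match model classes in the Darts time-series library; assuming that library (or an equivalent PyTorch setup) was used, the following is a minimal sketch of the LSTM baseline. The file name, column names, look-back window, and forecast horizon are illustrative assumptions, not values from the project.

```python
import pandas as pd
from darts import TimeSeries
from darts.models import BlockRNNModel

# Hypothetical daily production export for one Volve well; the file and
# column names ("date", "oil_rate") are assumptions for illustration.
df = pd.read_csv("volve_well_1_rates.csv", parse_dates=["date"])
series = TimeSeries.from_dataframe(df, time_col="date", value_cols="oil_rate")

# Hold out the last 20% of the history as the test set.
train, test = series.split_after(0.8)

# LSTM baseline: Darts' BlockRNNModel wraps a PyTorch recurrent encoder
# that maps a look-back window to a fixed-length forecast.
lstm = BlockRNNModel(
    input_chunk_length=30,    # look-back window (assumed)
    output_chunk_length=7,    # forecast horizon per step (assumed)
    model="LSTM",
    n_epochs=100,
    random_state=42,
)
lstm.fit(train)
lstm_forecast = lstm.predict(n=len(test))
```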

Temporal Fusion Transformer (TFT)
The Temporal Fusion Transformer is an attention-based deep learning model for time-series forecasting. Its building block is the Gated Residual Network (GRN), which comprises two dense layers and two activation functions, the Exponential Linear Unit (ELU) and the Gated Linear Unit (GLU), allowing both skip connections and gating for efficient information flow. The model also contains a Variable Selection Network (VSN) for selecting the most relevant features at each time step. Time-dependent processing is based on an LSTM encoder-decoder for local processing and a self-attention layer for learning long-range dependencies across time steps. The architecture used in this project was built with the PyTorch library, as shown in Figure 1. The TFT is trained by minimizing the quantile loss summed over a set of quantiles q ∈ {0.1, 0.5, 0.9}, as given in Equation 1; a sketch of a comparable TFT configuration follows.
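Assuming the same Darts-style setup as in the earlier sketch, a TFT model could be configured as below; the quantile set matches Equation 1, while the window lengths and epoch count are illustrative assumptions.

```python
from darts.models import TFTModel
from darts.utils.likelihood_models import QuantileRegression

# TFT trained with the quantile (pinball) loss of Equation 1 over
# q = 0.1, 0.5, 0.9. add_relative_index=True lets the model run without
# explicit future covariates (this dataset has no known future inputs).
tft = TFTModel(
    input_chunk_length=30,    # look-back window (assumed)
    output_chunk_length=7,    # forecast horizon (assumed)
    likelihood=QuantileRegression(quantiles=[0.1, 0.5, 0.9]),
    add_relative_index=True,
    n_epochs=100,
    random_state=42,
)
tft.fit(train)                       # `train` from the LSTM sketch above
tft_forecast = tft.predict(n=len(test))
```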

Figure 1: TFT model architecture

$\mathrm{QL}(y, \hat{y}, q) = \max\left[\, q\,(y - \hat{y}),\ (1 - q)\,(\hat{y} - y) \,\right]$                            Equation 1
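As a concrete reading of Equation 1, a minimal PyTorch implementation of the quantile (pinball) loss, summed over the quantile set used in this project and averaged over samples, might look like the sketch below (function and argument names are illustrative).

```python
import torch

def quantile_loss(y: torch.Tensor, y_hat: torch.Tensor,
                  quantiles=(0.1, 0.5, 0.9)) -> torch.Tensor:
    """Quantile (pinball) loss of Equation 1, summed over the quantile set.

    y:     observed values, shape (...,)
    y_hat: predicted quantiles, shape (..., len(quantiles))
    """
    losses = []
    for i, q in enumerate(quantiles):
        error = y - y_hat[..., i]
        # max[q * (y - y_hat), (1 - q) * (y_hat - y)]
        losses.append(torch.max(q * error, (q - 1) * error))
    # Sum across quantiles, then average over samples/time steps.
    return torch.mean(torch.stack(losses, dim=-1).sum(dim=-1))
```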

Transfer Learning Using Pretrained N-BEATS on the M4 Dataset
N-BEATS is a state-of-the-art deep neural model, introduced in 2019, for time-series forecasting tasks. Its basic building block is a multilayer fully connected network with ReLU activations. Each block predicts two sets of expansion coefficients, forward (forecast) and backward (backcast). Blocks are organized into stacks using the doubly residual stacking principle, and the partial forecasts produced by each block are hierarchically aggregated, first at the stack level and then at the overall network level. When applied to the M4 dataset, N-BEATS outperformed all existing deep learning models, improving forecasting accuracy by 11%. M4 is a large and highly heterogeneous dataset containing a collection of 100,000 time series from business, financial, and economic forecasting problems. The Symmetric Mean Absolute Percentage Error (SMAPE, Equation 2) was used as the loss function during training. A transfer (meta) learning approach was then used to predict the oil rate series of our testing dataset by sharing the N-BEATS model parameters and weights learned from the M4 series. Figure 2 shows the adopted transfer learning approach, and a minimal sketch of it follows.
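Assuming the same Darts-style setup as in the earlier sketches, the transfer learning step could be expressed as pretraining an N-BEATS model on a collection of M4 series and then forecasting the Volve well rates with the learned weights; the M4 loading step and all hyperparameters below are illustrative assumptions.

```python
from darts.models import NBEATSModel

# `m4_series` is assumed to be a list of darts.TimeSeries built from the
# M4 dataset (loading and scaling M4 is omitted here for brevity).
nbeats = NBEATSModel(
    input_chunk_length=30,      # look-back window (assumed)
    output_chunk_length=7,      # forecast horizon (assumed)
    generic_architecture=True,  # generic (non-interpretable) stacks
    n_epochs=20,
    random_state=42,
)

# Pretrain on the large, heterogeneous M4 collection.
nbeats.fit(m4_series)

# Transfer: reuse the weights learned on M4 to forecast the Volve well,
# conditioning only on the well's own (short) training history.
nbeats_forecast = nbeats.predict(n=len(test), series=train)
```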

Figure 2: N-BEATS model architecture

$\mathrm{SMAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \frac{2\,\lvert y_t - \hat{y}_t \rvert}{\lvert y_t \rvert + \lvert \hat{y}_t \rvert}$                        Equation 2
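For reference, Equation 2 can be computed directly with NumPy as in the short sketch below (array names are illustrative).

```python
import numpy as np

def smape(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Symmetric Mean Absolute Percentage Error of Equation 2, in percent."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(100.0 / len(y) * np.sum(2.0 * np.abs(y - y_hat)
                                         / (np.abs(y) + np.abs(y_hat))))
```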

Results and Discussion
With the tuned hyperparameters achieved for the TFT and N-BEATS models, the N-BEATS model outperformed the TFT model. Figure 3 shows the prediction results of all models on the testing dataset for well-1 and well-2. The N-BEATS model achieved an excellent match to the observed oil rates in the test data for both wells. This transfer learning approach, drawing on the abundant M4 time-series data, overcame the limitation of the small dataset size and yielded successful predictions without supplying the model with any covariate features.

Figure 3: Oil rate prediction using three models (BlockRNN, TFT, and N-BEATS)