Cross-Oilfield Reservoir Classification via Multi-Scale Sensor Knowledge Transfer
Authors: Zhi Li, Zhefeng Wang, Zhicheng Wei, Xiangguang Zhou, Yijun Wang, Baoxing Huai, Qi Liu, Nicholas Jing Yuan, Renbin Gong, Enhong Chen
AAAI 2021, pp. 4215-4223 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we evaluate our approaches by conducting extensive experiments with real-world industrial datasets. The experimental results clearly demonstrate the effectiveness of our proposed approaches to transfer the geological knowledge and generate the cross-oilfield reservoir classifications. |
| Researcher Affiliation | Collaboration | (1) Anhui Province Key Laboratory of Big Data Analysis and Application, School of Data Science & School of Computer Science and Technology, University of Science and Technology of China; (2) Huawei Cloud & AI; (3) Research Institute of Petroleum Exploration & Development, Petro China. Emails: zhili03@mail.ustc.edu.cn, {wangzhefeng, weizhicheng1, wangyijun13, huaibaoxing, nicholas.yuan}@huawei.com, {qiliuql, cheneh}@ustc.edu.cn, {zhouxg69, gongrb}@petrochina.com.cn |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is released or provide a link to a code repository. |
| Open Datasets | No | The experimental data sets are collected from the real industry exploration and production process of a famous oil and gas company, i.e., Petro China. These data sets contain a large number of well logs in two main oilfields, i.e., #9FAB2 and #BF8A9. All the well information and oilfield information have been desensitized to protect data privacy. |
| Dataset Splits | Yes | Table 1 shows the statistics of experimental datasets. For each oilfield... # of wells in training set: 178 and 88; # of wells in test set: 88 and 89; # of unlabeled wells: 179 and 44 (one value per oilfield). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used. |
| Experiment Setup | Yes | Hyperparameters Setting. We set the size of the max convolution window to 21 for well logs and designed three convolution scales, i.e., 11, 15, and 21. The number of filters for each convolution was set to 256, 256, and 128. The dimension of the state in the LSTM was set to 128. The dimension of well embedding Vk was set to 100. As a default setting, the activation functions used in the convolution layers and fully connected layer were set to Rectified Linear Unit (ReLU). All weight matrices are randomly initialized by a uniform distribution. Our model was trained with an initial learning rate of 0.01 and was exponentially decayed by a factor of 0.75. The batch size of samples was set to 30000. We stop the training process when the loss on the validation set stabilizes. |
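
The quoted setup maps naturally onto a multi-scale 1-D convolutional encoder followed by an LSTM. Below is a minimal PyTorch sketch of such a configuration, for orientation only: the module wiring, the number of input log curves (`in_channels`), the number of reservoir classes, and the handling of the well embedding are assumptions not taken from the paper; only the kernel sizes (11/15/21), filter counts (256/256/128), LSTM state size (128), embedding dimension (100), ReLU activations, uniform weight initialization, initial learning rate (0.01), and exponential decay factor (0.75) come from the reported settings.

```python
# Illustrative sketch only: `in_channels`, `num_classes`, and the fusion of the
# multi-scale features are assumptions; hyperparameter values follow the quoted setup.
import torch
import torch.nn as nn


class MultiScaleLogEncoder(nn.Module):
    """Multi-scale 1-D convolutions over well-log sequences followed by an LSTM."""

    def __init__(self, in_channels=8, num_classes=4, well_emb_dim=100):
        super().__init__()
        # Three convolution scales with windows 11, 15, 21 and 256/256/128 filters.
        scales = [(11, 256), (15, 256), (21, 128)]
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(in_channels, filters, kernel_size=k, padding=k // 2),
                nn.ReLU(),
            )
            for k, filters in scales
        )
        conv_out = sum(f for _, f in scales)  # 640 channels after concatenation
        self.lstm = nn.LSTM(conv_out, 128, batch_first=True)
        # Placeholder well embedding of dimension 100, uniformly initialized.
        self.well_emb = nn.Parameter(torch.empty(well_emb_dim).uniform_(-0.1, 0.1))
        self.classifier = nn.Sequential(
            nn.Linear(128 + well_emb_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, logs):
        # logs: (batch, in_channels, depth_steps)
        feats = torch.cat([conv(logs) for conv in self.convs], dim=1)  # (B, 640, T)
        _, (h, _) = self.lstm(feats.transpose(1, 2))                   # final LSTM state
        emb = self.well_emb.expand(logs.size(0), -1)                   # broadcast embedding
        return self.classifier(torch.cat([h[-1], emb], dim=1))


model = MultiScaleLogEncoder()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Exponential decay of the learning rate by a factor of 0.75, per the reported setup.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.75)
```

The batch size of 30000 quoted above most plausibly refers to per-depth-point samples rather than whole wells, so it is omitted from the sketch; the actual knowledge-transfer components of the paper are likewise not represented here.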