Automatic detection of subsea pipelines with Deep Learning

Seabed pipelines are essential infrastructure for the transportation of oil and gas. Timely inspection is necessary to verify their integrity and determine the need for maintenance, as failures in these lines can interrupt vital oil and gas distribution and cause environmental damage. Inspecting seabed pipelines, however, is a difficult task due to their remote location and long length. Autonomous underwater vehicles (AUVs) such as HUGIN (Figure 1), equipped with pipeline detection and tracking algorithms, represent cost-effective innovations in these operations.

Autonomous Pipeline Inspection

Traditionally, external pipeline inspection has been done with large, advanced offshore vessels using towed or remotely operated vehicles (ROVs). The objective is to detect pipeline burial, exposure, free spans and buckling, as well as indications of damage from third-party activities such as trawling and anchoring, and debris near the pipeline. Over the past two decades, AUVs have emerged as a more efficient and less expensive solution, as they are stable platforms that can move faster (typically 3-5 knots versus 1-2 knots for ROV inspection) and operate without constant monitoring from a mothership. Typical payload sensors for these AUVs include multibeam echo sounders (MBES), side-scan sonars and optical cameras.

To collect high-quality sensor data for inspection purposes, the AUV must follow the pipeline at a specified distance and height. Global position estimates from the vehicle’s inertial navigation system will not suffice, due to inevitable drift in the estimates over time and uncertainties in past pipeline position data. One solution is the automatic detection of pipelines in sensor data to provide real-time input to the vehicle’s control system, which then maintains the desired relative position and orientation (see flowchart in Figure 2). This is the basis of the PipeTracker system [1], developed and refined since 2010 in a cooperation between the Norwegian Defence Research Establishment (FFI) and Kongsberg Maritime (KM). The PipeTracker applies traditional image analysis techniques to detect and track pipelines in sensor data from side-scan sonar or MBES.

Figure 1: A HUGIN AUV prepares to dive.

Sensor data

An MBES is an active sonar for bathymetric mapping of a swath centered under the AUV. The sensor estimates both the intensity and the delay of the backscatter from the seafloor, providing the reflectivity and relative depth values. Figure 3 shows an example of pipeline data collected with a HUGIN AUV in shallow water off the coast of Brazil, while Figure 4 shows a TileCam camera image of a corresponding sub-area.

Through a collaboration between FFI, the University of Oslo and KM, we investigated how to use deep learning, described in more detail in the next section, for automatic online detection of seafloor pipelines in MBES data. To this end, we created and defined a method for annotating (labeling) pipelines in MBES data. Additionally, we adapted and extended state-of-the-art deep-learning-based object detection techniques to this new task and imaging format.

The dataset used is a collection of MBES images from 15 pipeline inspection missions collected with different HUGIN AUVs by KM and FFI at various locations around the world.

Figure 2: Pipeline detection and tracking flowchart. The AUV uses both prior knowledge of pipeline position and analysis of real-time sensor data to maintain the specified detection geometry. Green boxes indicate the scope of our work, while blue boxes give the broader navigation context.

Deep learning for pipeline detection

Deep learning refers to artificial neural network (ANN) models with many layers. Larger models, coupled with increasing computational capabilities and huge amounts of data, have driven the success of ANNs. In recent years, deep learning has become the most successful method for countless data processing tasks, such as image classification, detection and tracking, speech recognition and synthesis, machine translation, the games of chess and Go, and many more.

With the impressive achievements of deep learning, we sought to adapt and test it on the pipeline detection task. To get the most out of deep learning, we turned to state-of-the-art methods for similar tasks. In particular, we looked for models capable of detecting and classifying different objects in optical images. Optical image processing and interpretation were among the earliest beneficiaries of deep learning, and have also spearheaded much of its progress. We then adapted our chosen model, a combination of ResNet50 [2] and You Only Look Once (YOLO) [3], to the idiosyncrasies of the seabed pipeline detection task and the MBES data format. Figure 5 illustrates our deep learning model.
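One adaptation the MBES format forces can be sketched in a few lines. The function name and normalisation choices below are illustrative, not taken from the actual system: an MBES segment carries two co-registered channels, depth and reflectivity, rather than the three RGB channels a backbone like ResNet50 was designed for, so the segment can be stacked and normalised into a single channel-first array before entering the network.

```python
import numpy as np

def make_input_segment(depth: np.ndarray, reflectivity: np.ndarray) -> np.ndarray:
    """Stack the two MBES channels (pings x beams) into one channel-first
    array, normalised per channel to zero mean and unit variance, as a
    convolutional backbone would expect.  Illustrative sketch only."""
    seg = np.stack([depth, reflectivity]).astype(np.float64)  # (2, pings, beams)
    mean = seg.mean(axis=(1, 2), keepdims=True)
    std = seg.std(axis=(1, 2), keepdims=True) + 1e-8
    return (seg - mean) / std

# Toy 8-ping x 6-beam segment with random values
rng = np.random.default_rng(0)
x = make_input_segment(rng.random((8, 6)), rng.random((8, 6)))
print(x.shape)  # (2, 8, 6)
```

In practice the channel count mainly affects the backbone's first convolutional layer; the rest of the architecture can be reused unchanged.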

Deep learning has two stages: training and inference. The training phase shows the model millions of examples of what we want it to do, and through this phase the model gets better and better. After training, the model should have learned the general principle of the task and work well on new, unseen examples. When the model is used to interpret new examples after the training phase, it is in inference mode. The performance in inference mode is the measure of the success of the model.
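The two stages can be illustrated with a deliberately tiny stand-in: a one-weight logistic classifier fitted by gradient descent on toy data. Everything here (the data, the model, the threshold) is illustrative and unrelated to the real pipeline detector; it only shows the train-then-infer pattern.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: a feature x > 0.5 means "pipeline present" (label 1)
random.seed(0)
data = [(x, 1 if x > 0.5 else 0) for x in [random.random() for _ in range(200)]]

# Training stage: fit weight w and bias b by gradient descent
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# Inference stage: the frozen model interprets new, unseen examples
predict = lambda x: sigmoid(w * x + b) > 0.5
print(predict(0.9), predict(0.1))  # True False
```

The real model differs only in scale: millions of parameters instead of two, and MBES image segments instead of a single number, but the same split between a training phase and a frozen inference phase.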

Figure 3: Depth and reflectivity data from the EM2040 MBES mounted on a HUGIN AUV. A single pipeline (inner diameter 15 cm) is visible in both data channels at approximately beam number 200. Image sizes are 22 m x 61 m.

A key factor for a successful deep learning model is the sample data it trains on. For this we need to define and create labels that can supervise the model to learn its task. Since existing deep learning models such as ResNet and YOLO do not support our data format and task, we created a tool to manually annotate pipelines in MBES data. These annotations defined the goal of the task, which is to automatically predict whether or not an MBES image segment contains seafloor pipelines, and where the detected pipelines are. In addition to data and labels, training an ANN requires mathematically formulating the objective of the task. Due to the peculiarities of the task and the data format, we also proposed a new subsea pipeline detection task objective, the details of which are described in [4].
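A generic sketch of what such an objective can look like for one image segment with outputs (c, x1, ..., x4) as in Figure 5. This is only a common YOLO-style pattern, not the actual objective from [4]: the classification error is always penalised, while the coordinate error counts only when the label says a pipeline is present, so empty segments do not punish the coordinate outputs.

```python
import numpy as np

def detection_loss(pred, label):
    """Hypothetical YOLO-style objective for one MBES segment.

    pred and label are tuples (c, x1, x2, x3, x4): a pipeline-presence
    score in [0, 1] and four pipeline-segment coordinates.  Sketch
    only; the actual task objective is detailed in [4]."""
    c_pred, coords_pred = pred[0], np.asarray(pred[1:], dtype=float)
    c_true, coords_true = label[0], np.asarray(label[1:], dtype=float)
    eps = 1e-8
    # Binary cross-entropy on the presence score c
    bce = -(c_true * np.log(c_pred + eps)
            + (1 - c_true) * np.log(1 - c_pred + eps))
    # Squared coordinate error, masked by the true presence flag
    loc = c_true * np.sum((coords_pred - coords_true) ** 2)
    return float(bce + loc)

empty = detection_loss((0.1, 0, 0, 0, 0), (0.0, 0, 0, 0, 0))   # only BCE term
hit = detection_loss((0.9, 10, 3, 40, 5), (1.0, 12, 3, 41, 5))  # BCE + location
print(round(empty, 3), round(hit, 3))  # 0.105 5.105
```

Masking the localisation term this way is one standard answer to the question of what "correct coordinates" should mean for a segment that contains no pipeline at all.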


We evaluated the trained model in two ways: (i) how well it predicted whether an MBES image segment contained a pipeline, and (ii) how accurately it located the top of the pipelines. In tests, our model correctly predicted the presence or absence of pipelines in more than 85% of previously unseen cases; in other words, on sample data the model had not trained on. Additionally, the model located pipelines with an average offset of less than two pixels from the top of the labeled pipeline, where pixels correspond to pings (along-track) and beams (across-track). For reference, the width of the pipeline in the depth channel in Figure 3 spans about five beams.
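These two measures can be sketched as follows. The helper and its inputs are hypothetical, assuming per-segment presence labels and (ping, beam) coordinates for the top of the pipeline; the actual evaluation protocol is described in [4].

```python
import numpy as np

def evaluate(pred_present, true_present, pred_top, true_top):
    """Two illustrative evaluation measures for the trained model:
    (i) fraction of segments where pipeline presence/absence was
    predicted correctly, and (ii) mean pixel offset, in (ping, beam)
    units, between predicted and labeled pipeline tops, computed only
    over segments that truly contain a pipeline."""
    pred_present = np.asarray(pred_present, dtype=bool)
    true_present = np.asarray(true_present, dtype=bool)
    accuracy = float(np.mean(pred_present == true_present))
    # Euclidean offset in pixels, restricted to true pipeline segments
    diffs = np.asarray(pred_top, float)[true_present] - np.asarray(true_top, float)[true_present]
    mean_offset = float(np.mean(np.linalg.norm(diffs, axis=1)))
    return accuracy, mean_offset

# Four toy segments: one false alarm, two true detections with small offsets
acc, off = evaluate(
    pred_present=[1, 1, 0, 1],
    true_present=[1, 1, 0, 0],
    pred_top=[(100, 200), (101, 198), (0, 0), (50, 60)],
    true_top=[(100, 201), (103, 198), (0, 0), (0, 0)],
)
print(acc, off)  # 0.75 1.5
```

Separating the two measures matters: a model can be good at saying *whether* a pipeline is present while still being poor at saying *where* it is, and vice versa.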

Figure 4: A HUGIN camera image of the pipeline. The image covers an area of seabed approximately 6 m x 4 m centered on ping number 200.

Conclusion and perspectives

This work demonstrates that deep learning can be used effectively to detect pipelines in MBES data. Although traditional image analysis algorithms are already used successfully to detect seabed pipelines, they are usually hand-crafted and, to some extent, tailored to the application. With deep learning, however, the model learns through training to detect pipelines in different scenarios. A deep learning model can often be improved simply by using more example data. This means that detection performance can be improved over time as less common pipe configurations and environments are encountered, without the need to develop or refine new algorithms.

In further work, we will consider applying deep learning to detect seafloor pipelines in side-scan sonar and optical images. Additionally, we plan to use deep learning to assess the state of the pipeline across all the different sensor data, which would further automate inspection.

Figure 5: Illustration of the pipeline detection model. At the top, MBES image segments are fed into the model. At the bottom, five variables represent the results of the detection, where c indicates whether the input contains a pipeline or not, and x1, x2, x3, x4 give the coordinates of the pipeline segment if the input contains one. The blue box is an established deep learning architecture, while the green boxes customize the model to the MBES data format and the seabed pipeline detection task.

References

[1] Midtgaard, Ø., Krogstad, T.R. and Hagen, P.E., 2011, Sonar detection and monitoring of subsea pipelines, Proceedings of the Underwater Acoustic Measurements Conference, Kos, Greece, p. 9.

[2] He, K., et al., 2015, Deep Residual Learning for Image Recognition, arXiv:1512.03385 [cs]. [Online]. Available: http://arxiv.org/abs/1512.03385 (accessed 01/27/2022).

[3] Redmon, J., et al., 2016, You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640 [cs]. [Online]. Available: http://arxiv.org/abs/1506.02640 (accessed 02/19/2022).

[4] Schøyen, V.S., Warakagoda, N.D., and Midtgaard, Ø., 2021, Seafloor Pipeline Detection with Deep Learning, Proceedings of the Northern Lights Deep Learning Workshop, vol. 2. doi: 10.7557/18.5699.
