The University of Southampton
NEXUSS - Next Generation Unmanned Systems Science

Computer vision and machine learning for oceanographic research

Supervisors: Dr Daniel Jones, Dr Henry Ruhl (NOC), Dr Sasan Mahmood (UoS), Dr Brian J Bett (NOC)

Rationale:

The maturation of digital cameras has led to a rapid proliferation of their use in oceanographic applications. These applications range from geophysical descriptions of seafloor habitat, through biological studies of water-column and seafloor biogeochemistry, biodiversity and ecology, to baseline surveys and environmental impact assessments for industry sectors such as oil and gas, carbon capture and storage, and seafloor mining. Image data are now regularly collected by the National Oceanography Centre from a range of robotic platforms, including Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs), from the continental shelf to the deep sea. Many tools now exist for machine learning and automated annotation of images of complex scenes, yet their uptake in oceanographic applications remains very limited compared with the current and foreseeable potential of computer vision. Existing tools must be adapted and refined to bridge the gap between today's fragmented capability and the streamlined workflows needed to exploit optical data promptly and with known confidence. In addition, we wish to examine frameworks in which some automated image analysis is performed on board autonomous platforms, so that results can be transmitted to shore more efficiently via satellite communications.

Methodology:

The student will use data from multiple oceanographic themes and technology settings to assemble classification ontologies and vocabularies, build machine learning training libraries, handle real-time and delayed-mode image data for annotation, generate derived data products and information, and develop quality control / quality assurance approaches. A key aspect will be the refinement of machine vision algorithms to maximise annotation skill. The technology may include holographic images, standard photographs, and stereo and light-field (plenoptic) camera images. A method such as the 'bag of words' model will be employed for image retrieval and annotation; in this framework, algorithms such as the scale-invariant feature transform (SIFT) extract features from the acquired images, and system training is required as a pre-processing stage to generate a "dictionary" of regions of interest within the images, such as seafloor cracks and rocky outcrops. Platforms will include, but are not limited to, AUVs and ROVs. Data products can range from the locations of specific geophysical features, such as rocky outcrops, seafloor cracks and submarine bubble plumes, to the size, type and location of marine snow and animals in the water column and on the seafloor.
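As a rough illustration of the 'bag of words' approach described above, the sketch below shows one way such a pipeline is commonly assembled: SIFT descriptors are extracted from training images, clustered with k-means into a visual "dictionary", and each image is then represented as a histogram of visual words. It uses Python with OpenCV and scikit-learn purely for brevity (the project itself anticipates MATLAB and C/C++ development); the image directory, dictionary size and downstream classifier are hypothetical placeholders, not part of the project specification.

# Minimal bag-of-visual-words sketch: SIFT features -> k-means "dictionary"
# -> per-image histograms of visual words. All paths and parameter values
# are illustrative assumptions.
import glob

import cv2
import numpy as np
from sklearn.cluster import KMeans

DICTIONARY_SIZE = 200  # number of visual "words" (assumed value)


def sift_descriptors(image_path):
    """Return the SIFT descriptors of one greyscale image (or None)."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(image, None)
    return descriptors


# 1. Collect descriptors from a set of training images (hypothetical folder).
descriptor_sets = []
for path in sorted(glob.glob("training_images/*.png")):
    descriptors = sift_descriptors(path)
    if descriptors is not None:
        descriptor_sets.append(descriptors)

# 2. Build the visual "dictionary" by clustering all descriptors with k-means.
kmeans = KMeans(n_clusters=DICTIONARY_SIZE, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptor_sets))


# 3. Represent each image as a normalised histogram of visual words; these
#    histograms are what a conventional classifier (e.g. an SVM) would be
#    trained on, with annotated regions of interest providing the labels.
def bow_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(DICTIONARY_SIZE + 1))
    return hist / max(hist.sum(), 1)

histograms = np.array([bow_histogram(d) for d in descriptor_sets])
print(histograms.shape)  # (number of images with features, DICTIONARY_SIZE)

In practice, the toy image folder would be replaced by curated training libraries with expert annotations, and the histogram-plus-classifier stage by whichever retrieval or annotation model the project adopts; the quality assurance and on-board processing aspects discussed above sit around, rather than inside, this core loop.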

Training:

The NEXUSS CDT provides state-of-the-art, highly experiential training in the application and development of cutting-edge Smart and Autonomous Observing Systems for the environmental sciences, alongside comprehensive personal and professional development. There will be extensive opportunities for students to expand their multi-disciplinary outlook through interactions with a wide network of academic, research and industrial / government / policy partners. The student will be registered at the University of Southampton and hosted at the National Oceanography Centre Southampton. Specific training will include:

Prospective students should have some background in computer science, either through coursework or work experience. Knowledge of marine science is beneficial, but not required.

The student will receive training in computer vision, machine learning, and MATLAB and C/C++ programming. The student will also receive introductory training in oceanographic science and technology and have the opportunity to participate in one or more research expeditions using AUV or ROV technology.

Background Reading:

Schoening T, Bergmann M, Ontrup J, Taylor J, Dannheim J, et al. (2012) Semi-Automated Image Analysis for the Assessment of Megafaunal Densities at the Arctic Deep-Sea Observatory HAUSGARTEN. PLoS ONE 7(6): e38179. doi:10.1371/journal.pone.0038179.

Gorsky, G., et al. (2010) Digital zooplankton image analysis using the ZooScan integrated system. J. Plankton Res. 32: 285-303.

Sivic, J., Zisserman, A. (2009) Efficient Visual Search of Videos Cast as Text Retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(4): 591-605.

Eligibility and how to apply:

To apply for this project, use the 'apply for a NEXUSS CDT studentship' application link.
