Canadian machine learning researchers from the University of Victoria have teamed up with government marine biologists and private remote sensing specialists to develop a system for improved detection and classification of schools of herring.
The world’s oceans are home to some 200,000 species of sea animals, including over 18,000 species of fish, more than 1,800 sea stars, 816 squids, 93 whales and dolphins and 8,900 clams and other bivalves, according to a 2015 report from the World Register of Marine Species.
Ocean fishes come in a variety of shapes, sizes, and colors and live in many different depth and temperature environments. This diverse marine world is, however, under threat. A 2016 United Nations Food and Agriculture Organization report on world fisheries and aquaculture reveals that 89.5 percent of the world’s fish stocks are either fully fished (catches are close to the maximum sustainable yield) or overfished (catches are unsustainable).
Ensuring the sustainability of fisheries production is crucial to people’s livelihoods, food security and nutrition and is a major priority for the Canadian Department of Fisheries and Oceans (DFO).
DFO biologists enlisted the help of computer engineers from the Computer Vision Research Laboratory at the University of Victoria (UVic) and remote sensing specialists and acousticians at Victoria-based environmental monitoring solutions company ASL Environmental Sciences to explore ways to speed up underwater species detection with AI technologies.
The collaboration produced a deep learning framework for the automatic detection of schools of herring from echograms that outperforms traditional hand-crafted machine learning algorithms on precision, recall, and F1-score metrics.
“Working on schools of herring was just a proof of concept and we are exploring the expansion of the detection capabilities to other species,” Alireza Rezvanifar, an ML researcher at UVic and the co-first author of the project paper A Deep Learning-based Framework for the Detection of Schools of Herring in Echograms, told Synced in an email.
Modern acoustic fish detection involves locating and identifying schools of fish through the use of acoustic instruments such as echosounders. ASL has been building acoustic backscatter echosounders for years, and in 2012 introduced the Acoustic Zooplankton Fish Profiler (AZFP), now widely used by government and academic researchers. Because different fish species have different physical properties, they produce different acoustic responses, which can be visualized as 2D images known as echograms.
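The echogram-as-image idea can be illustrated with a toy example (all values here are invented for illustration; real echograms come from echosounder backscatter, not random numbers): depth bins form the rows, ping times form the columns, and each cell holds a backscatter intensity, so a school of fish appears as a bright localized region.

```python
import numpy as np

# A toy "echogram": rows are depth bins, columns are ping times,
# and each cell holds a backscatter intensity (hypothetical dB values).
rng = np.random.default_rng(0)
depth_bins, pings = 100, 240
echogram = rng.normal(loc=-80.0, scale=3.0, size=(depth_bins, pings))

# A school of fish shows up as a localized region of stronger backscatter.
echogram[40:55, 100:140] += 25.0  # simulated school

# Treated as a 2D image, the echogram can be fed to standard
# computer vision pipelines (thresholding, ROI extraction, CNNs).
school_mask = echogram > -65.0
print(school_mask.sum() > 0)  # the simulated school stands out
```

Because the data is just a 2D intensity array, standard image-processing and deep learning tooling applies directly, which is what makes the approach described below possible.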
Although plenty of echogram data exists, relatively little had been done to process it automatically prior to this project.
Rezvanifar explains that interpreting echograms is a time-consuming process that has typically been done manually by experts, or at best semi-automatically, and is prone to inconsistencies. Moreover, researchers need to either develop in-house software or use expensive third-party tools such as Echoview to process and analyze their data.
The proposed deep learning framework improves the interpretation of massive raw acoustic data using two main components: a novel region of interest (ROI) extractor and a deep learning-based image classifier.
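The two-stage design can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sliding-window extractor and the threshold-based classifier below are hypothetical stand-ins for the paper's classical-CV ROI extractor and trained CNN.

```python
import numpy as np

def extract_rois(echogram, win=(16, 16), stride=8, threshold=-70.0):
    """Hypothetical ROI extractor: slide a fixed window over the
    echogram and propose windows whose mean backscatter exceeds a
    threshold (a stand-in for the paper's classical-CV extractor)."""
    rois = []
    h, w = echogram.shape
    for r in range(0, h - win[0] + 1, stride):
        for c in range(0, w - win[1] + 1, stride):
            patch = echogram[r:r + win[0], c:c + win[1]]
            if patch.mean() > threshold:
                rois.append((r, c, win[0], win[1]))
    return rois

def classify_roi(patch):
    """Placeholder for the classifier stage: a trivial rule here;
    in the paper this is a trained CNN (or SVM)."""
    return "herring_school" if patch.mean() > -60.0 else "background"

# Toy echogram with one bright region standing in for a school.
echogram = np.full((64, 64), -80.0)
echogram[16:48, 16:48] = -55.0

detections = [
    (r, c) for (r, c, hh, ww) in extract_rois(echogram)
    if classify_roi(echogram[r:r + hh, c:c + ww]) == "herring_school"
]
print(len(detections) > 0)  # candidate windows over the bright region
```

The division of labor matters: the cheap extractor prunes the vast majority of the echogram so the expensive classifier only runs on a handful of candidate regions.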
The researchers say a short-term goal is to expand the framework’s data processing ability to additional species such as juvenile salmon and zooplankton. They also see long-term potential in detection of suspended sediments, ocean turbulence, oil in water, and even the effects of water temperature shifts caused by climate change-related phenomena.
All that’s needed is the annotated data.
The UVic researchers used 100 echograms, 70 for training and 30 for testing. While that was representative enough for this study, Rezvanifar says “moving forward, we will need a substantially larger dataset to train more complex systems, such as end-to-end detection frameworks that do not require finding regions of interest within echograms as a first step like in this study.”
Another potential challenge lies in this study’s hybrid approach: the ROI extractor first finds regions in the echogram that have a high likelihood of containing a school of herring based on classical features and computer vision techniques, and these candidate regions are then classified using species-specific trained support vector machines or convolutional neural networks. This species-specific design limits the scalability of the framework to other species.
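The species-specificity can be illustrated with a toy classifier. The nearest-centroid model below is a stand-in for the paper's trained SVMs, and every feature value is invented for illustration: a model fit only on herring-vs-background features has no notion of any other species and must be retrained for each new target.

```python
import numpy as np

# Toy hand-crafted features per candidate region:
# (mean backscatter in dB, patch height in depth bins).
# All values are invented for illustration only.
herring_feats    = np.array([[-55.0, 12.0], [-57.0, 10.0], [-54.0, 14.0]])
background_feats = np.array([[-78.0,  4.0], [-80.0,  3.0], [-76.0,  5.0]])

# A nearest-centroid classifier standing in for a species-specific SVM.
centroids = {
    "herring_school": herring_feats.mean(axis=0),
    "background": background_feats.mean(axis=0),
}

def classify(feat):
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

print(classify(np.array([-56.0, 11.0])))  # herring_school
print(classify(np.array([-79.0,  4.0])))  # background

# A species with a different acoustic signature (e.g. a zooplankton layer)
# falls outside both centroids; detecting it requires new training data
# and a new decision boundary, which is the scalability limitation.
```

An end-to-end detector trained across species would sidestep this per-species retraining, which motivates the direction described next.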
Rezvanifar and his colleagues are exploring the possibility of substituting the ROI extractor with an end-to-end deep learning-based framework to streamline the extension of detection to new species.
The paper A Deep Learning-based Framework for the Detection of Schools of Herring in Echograms is on arXiv. It will be presented at the “Tackling Climate Change with Machine Learning” Workshop next month at NeurIPS 2019 in Vancouver.
Journalist: Yuan Yuan | Editor: Michael Sarazen