
Semantic Segmentation Boosts Kiwifruit-Harvesting Robot Performance

Facing a shortage of seasonal workers and rising labour costs, kiwifruit growers may get some relief from robots equipped with a new AI-powered fruit detection system.

Traditional human-based kiwifruit picking is a tedious and repetitive job. It’s also physically taxing — pickers must carry heavy picking bags, which can cause back strain and lead to more serious musculoskeletal problems.
These issues have increased interest in advanced autonomous harvesting robots, which can reduce labour costs while also improving harvested fruit quality. Accurate and reliable kiwifruit detection is one of the biggest challenges faced by orchard fruit-harvesting robots, whose computer vision systems must deal with dynamic lighting conditions, fruit occlusions, and other difficult canopy conditions.


A team of researchers from the University of Auckland’s Centre for Automation and Robotic Engineering Science recently introduced a semantic segmentation method to meet these challenges. The system employs two novel image simulation techniques aimed at detecting kiwifruit and is shown to work efficiently under the changing and often harsh lighting conditions found in orchard canopies.
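The paper does not include code, but as a rough illustration of how a semantic-segmentation output can be turned into per-fruit detections, here is a minimal Python/OpenCV sketch. It is not the authors’ pipeline: the binary-mask input, the connected-components step, and the area threshold are assumptions made for illustration only.

```python
# Hypothetical sketch: from a segmentation mask to per-fruit detections.
# Assumes a binary mask where 255 marks pixels classified as kiwifruit.
import cv2
import numpy as np

def detections_from_mask(fruit_mask, min_area=200):
    """Return a list of (cx, cy) centroids, one per detected fruit region."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(fruit_mask)
    fruits = []
    for i in range(1, num):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:  # drop tiny noise blobs
            fruits.append(tuple(centroids[i]))
    return fruits
```

In practice the mask would come from the segmentation network; the connected-components grouping and the `min_area` filter shown here are simply one common way to convert pixel-wise predictions into individual fruit locations.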

The researchers integrated a preprocessing step to improve system performance under different lighting conditions. For over-exposed and glare-affected images, histogram equalization (HE) is applied to compensate for dynamic lighting. Because intensity varies across a glare image, HE is applied separately to each sub-image; for over-exposed images, HE is applied to the entire frame.
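As a rough sketch of that preprocessing logic (not the authors’ code), the snippet below applies global HE to over-exposed frames and tiled equalization to glare frames. CLAHE is used here as a stand-in for per-sub-image HE, and the colour-space choice, clip limit, and tile size are assumptions.

```python
# Sketch of lighting-dependent preprocessing with histogram equalization (HE).
# Minimal OpenCV illustration; parameters are assumptions, not from the paper.
import cv2

def preprocess_for_lighting(bgr, glare=False, tile=(8, 8)):
    """Equalize the intensity channel of a BGR image.

    Glare frames get tiled (local) equalization because their intensity
    varies across the frame; over-exposed frames get global HE.
    """
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0]
    if glare:
        # Tiled adaptive equalization (CLAHE) approximates HE per sub-image.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=tile)
        ycrcb[:, :, 0] = clahe.apply(y)
    else:
        # Global HE for uniformly over-exposed frames.
        ycrcb[:, :, 0] = cv2.equalizeHist(y)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```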

Image: Examples of (a) a glare image, (b) its blue channel, (c) its green channel, and (d) its red channel
Image: Examples of the calyx occluded by (a) a branch, (b) a leaf, (c) a wire, (d) another fruit, (e) a post, and (f) a support beam
Image: Overall performance of the detection method on the kiwifruit occlusion dataset
Image: Detection method performance on occluded and non-occluded kiwifruit

The performance of the University of Auckland method was evaluated on a real-world 3D kiwifruit image set covering a variety of lighting conditions and fruit occlusion scenarios, using F1 score and processing time as metrics.

The semantic segmentation method alone obtained an F1 score of 0.82 on a typical-lighting image set, but struggled under harsh lighting with an F1 score of just 0.13. With the proposed preprocessing applied, the vision system’s performance under harsh lighting improved to an F1 score of 0.42. In the case of fruit occlusion, the method detected 87.0 percent of uncovered kiwifruit and 30.0 percent of covered kiwifruit across all lighting conditions.
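For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall. The short snippet below shows how it is computed from detection counts; the example counts are illustrative only and are not taken from the paper.

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Illustrative numbers only (not from the paper):
print(round(f1_score(tp=820, fp=180, fn=180), 2))  # 0.82
```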

The paper Kiwifruit Detection in Challenging Conditions is on arXiv.


Author: Xuehan Wang | Editor: Michael Sarazen & Fangyu Cai
