Plant-part segmentation using deep learning and multi-view vision

Weinan Shi, Rick van de Zedde, Huanyu Jiang, Gert Kootstra

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

To accelerate the understanding of the relationship between genotype and phenotype, plant scientists and plant breeders are looking for more advanced phenotyping systems that provide more detailed phenotypic information about plants. Most current systems provide information on the whole-plant level and not on the level of specific plant parts such as leaves, nodes and stems. Computer vision provides possibilities to extract information about plant parts from images. However, the segmentation of plant parts is a challenging problem, due to the inherent variation in appearance and shape of natural objects. In this paper, deep-learning methods are proposed to deal with this variation. Moreover, a multi-view approach is taken that allows the integration of information from the two-dimensional (2D) images into a three-dimensional (3D) point-cloud model of the plant. Specifically, a fully convolutional network (FCN) and a Mask R-CNN (region-based convolutional neural network) were used for semantic and instance segmentation on the 2D images. The different viewpoints were then combined to segment the 3D point cloud. The performance of the 2D and multi-view approaches was evaluated on tomato seedling plants. Our results show that the integration of information in 3D outperforms the 2D approach, because errors in 2D are not persistent across the different viewpoints and can therefore be overcome in 3D.
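The abstract describes projecting per-view 2D segmentation labels onto a 3D point cloud, where inconsistent 2D errors are overcome by combining viewpoints. A minimal sketch of one common way to do this — per-point majority voting over calibrated views — is shown below. This is an illustrative reconstruction, not the authors' exact method: the function name, the shared-intrinsics assumption, and the pinhole projection model are all assumptions for the sketch.

```python
import numpy as np

def fuse_view_labels(points, poses, K, view_label_maps, num_classes):
    """Majority-vote fusion of per-view 2D semantic labels onto a 3D point cloud.

    points: (N, 3) array of 3D points in world coordinates.
    poses: list of (3, 4) camera extrinsics [R | t] (world -> camera), one per view.
    K: (3, 3) pinhole intrinsics, assumed shared by all views.
    view_label_maps: list of (H, W) integer label images, one per view.
    num_classes: number of semantic classes.
    Returns: (N,) array of fused per-point class labels.
    """
    n = points.shape[0]
    votes = np.zeros((n, num_classes), dtype=np.int64)
    homog = np.hstack([points, np.ones((n, 1))])        # (N, 4) homogeneous coords
    for pose, labels in zip(poses, view_label_maps):
        cam = homog @ pose.T                            # (N, 3) points in camera frame
        in_front = cam[:, 2] > 0                        # ignore points behind the camera
        pix = cam @ K.T
        pix = pix[:, :2] / pix[:, 2:3]                  # perspective division -> pixels
        u = np.round(pix[:, 0]).astype(int)
        v = np.round(pix[:, 1]).astype(int)
        h, w = labels.shape
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.flatnonzero(valid)
        votes[idx, labels[v[idx], u[idx]]] += 1         # one vote per visible view
    return votes.argmax(axis=1)
```

Because each point accumulates one vote per view in which it is visible, a segmentation error in a single 2D view is outvoted by the other views — the mechanism the abstract credits for the 3D approach outperforming the 2D one. A full pipeline would also need occlusion handling (e.g. a depth test), which is omitted here for brevity.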

Original language: English
Pages (from-to): 81-95
Number of pages: 15
Journal: Biosystems Engineering
Volume: 187
DOI: 10.1016/j.biosystemseng.2019.08.014
Publication status: Published - Nov 2019


Keywords

  • 2D images and 3D point clouds
  • digital plant phenotyping
  • instance segmentation
  • semantic segmentation

Cite this

@article{26300b9a51404f3f933012db59a3481a,
title = "Plant-part segmentation using deep learning and multi-view vision",
abstract = "To accelerate the understanding of the relationship between genotype and phenotype, plant scientists and plant breeders are looking for more advanced phenotyping systems that provide more detailed phenotypic information about plants. Most current systems provide information on the whole-plant level and not on the level of specific plant parts such as leaves, nodes and stems. Computer vision provides possibilities to extract information from plant parts from images. However, the segmentation of plant parts is a challenging problem, due to the inherent variation in appearance and shape of natural objects. In this paper, deep-learning methods are proposed to deal with this variation. Moreover, a multi-view approach is taken that allows the integration of information from the two-dimensional (2D) images into a three-dimensional (3D) point-cloud model of the plant. Specifically, a fully convolutional network (FCN) and a masked R-CNN (region-based convolutional neural network) were used for semantic and instance segmentation on the 2D images. The different viewpoints were then combined to segment the 3D point cloud. The performance of the 2D and multi-view approaches was evaluated on tomato seedling plants. Our results show that the integration of information in 3D outperforms the 2D approach, because errors in 2D are not persistent for the different viewpoints and can therefore be overcome in 3D.",
keywords = "2D images and 3D point clouds, digital plant phenotyping, instance segmentation, semantic segmentation",
author = "Weinan Shi and {van de Zedde}, Rick and Huanyu Jiang and Gert Kootstra",
year = "2019",
month = "11",
doi = "10.1016/j.biosystemseng.2019.08.014",
language = "English",
volume = "187",
pages = "81--95",
journal = "Biosystems Engineering",
issn = "1537-5110",
publisher = "Elsevier",

}

Plant-part segmentation using deep learning and multi-view vision. / Shi, Weinan; van de Zedde, Rick; Jiang, Huanyu; Kootstra, Gert.

In: Biosystems Engineering, Vol. 187, 11.2019, p. 81-95.

TY - JOUR

T1 - Plant-part segmentation using deep learning and multi-view vision

AU - Shi, Weinan

AU - van de Zedde, Rick

AU - Jiang, Huanyu

AU - Kootstra, Gert

PY - 2019/11

Y1 - 2019/11

AB - To accelerate the understanding of the relationship between genotype and phenotype, plant scientists and plant breeders are looking for more advanced phenotyping systems that provide more detailed phenotypic information about plants. Most current systems provide information on the whole-plant level and not on the level of specific plant parts such as leaves, nodes and stems. Computer vision provides possibilities to extract information from plant parts from images. However, the segmentation of plant parts is a challenging problem, due to the inherent variation in appearance and shape of natural objects. In this paper, deep-learning methods are proposed to deal with this variation. Moreover, a multi-view approach is taken that allows the integration of information from the two-dimensional (2D) images into a three-dimensional (3D) point-cloud model of the plant. Specifically, a fully convolutional network (FCN) and a masked R-CNN (region-based convolutional neural network) were used for semantic and instance segmentation on the 2D images. The different viewpoints were then combined to segment the 3D point cloud. The performance of the 2D and multi-view approaches was evaluated on tomato seedling plants. Our results show that the integration of information in 3D outperforms the 2D approach, because errors in 2D are not persistent for the different viewpoints and can therefore be overcome in 3D.

KW - 2D images and 3D point clouds

KW - digital plant phenotyping

KW - instance segmentation

KW - semantic segmentation

U2 - 10.1016/j.biosystemseng.2019.08.014

DO - 10.1016/j.biosystemseng.2019.08.014

M3 - Article

VL - 187

SP - 81

EP - 95

JO - Biosystems Engineering

JF - Biosystems Engineering

SN - 1537-5110

ER -