Abstract
The present dataset comprises a collection of RGB-D apple tree images that can be used to train and test computer vision-based fruit detection and sizing methods. It encompasses two distinct sets of data, acquired in a Fuji and an Elstar apple orchard. The Fuji apple orchard sub-set consists of 3925 RGB-D images containing a total of 15,335 apples annotated with both modal and amodal segmentation masks. Modal masks denote the visible portions of the apples, whereas amodal masks encompass both the visible and the occluded apple regions. Notably, this dataset is the first public resource to incorporate on-tree fruit amodal masks. This pioneering inclusion addresses a critical gap in existing datasets, enabling the development of robust automatic fruit sizing methods and accurate fruit visibility estimation, particularly in the presence of partial occlusions. Besides the fruit segmentation masks, the dataset also includes the fruit size (calliper) ground truth for each annotated apple. The second sub-set comprises 2731 RGB-D images capturing five Elstar apple trees at four distinct growth stages. This sub-set includes the mean diameter of each tree at every growth stage and serves as a valuable resource for evaluating fruit sizing methods trained with the first sub-set. The present dataset was employed in the research paper titled “Looking behind occlusions: a study on amodal segmentation for robust on-tree apple fruit size estimation” [1].
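To make the annotation semantics concrete, the sketch below shows one way the modal and amodal masks could be combined: fruit visibility as the ratio of modal to amodal area, and a simple occlusion-robust size estimate obtained from the amodal pixel extent together with the depth channel under a pinhole-camera assumption. This is a minimal illustration only; the mask and depth arrays, the pinhole sizing formula, and the focal length `fx` are assumptions made for the example and do not reflect the dataset's actual file format or the method described in [1].

```python
import numpy as np


def fruit_visibility(modal_mask: np.ndarray, amodal_mask: np.ndarray) -> float:
    """Fraction of the apple that is visible: modal area divided by amodal area."""
    amodal_area = amodal_mask.astype(bool).sum()
    if amodal_area == 0:
        return 0.0
    return float(modal_mask.astype(bool).sum() / amodal_area)


def apple_diameter_mm(modal_mask: np.ndarray, amodal_mask: np.ndarray,
                      depth_m: np.ndarray, fx: float) -> float:
    """Rough pinhole-camera size estimate: the pixel width of the amodal mask is
    back-projected using the median depth sampled over the visible (modal) region.
    `fx` is the horizontal focal length in pixels (an assumed camera intrinsic)."""
    _, xs = np.nonzero(amodal_mask)
    visible = modal_mask.astype(bool)
    if xs.size == 0 or not visible.any():
        return 0.0
    width_px = xs.max() - xs.min() + 1
    z = np.median(depth_m[visible])             # metres, taken on visible pixels only
    return float(width_px * z / fx * 1000.0)    # millimetres


# Toy example with synthetic masks and a flat depth plane at 1.2 m.
if __name__ == "__main__":
    modal = np.zeros((100, 100), dtype=np.uint8)
    amodal = np.zeros((100, 100), dtype=np.uint8)
    amodal[40:60, 40:60] = 1     # full (amodal) apple extent
    modal[40:60, 40:52] = 1      # only part of the apple is visible
    depth = np.full((100, 100), 1.2)
    print(f"visibility: {fruit_visibility(modal, amodal):.2f}")
    print(f"diameter:   {apple_diameter_mm(modal, amodal, depth, fx=615.0):.1f} mm")
```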
Original language | English |
---|---|
Article number | 110000 |
Journal | Data in Brief |
Volume | 52 |
DOIs | |
Publication status | Published - Feb 2024 |
Keywords
- Agricultural robotics
- Amodal segmentation
- Depth image
- Fruit measurement
- Fruit visibility
- Instance segmentation
- Modal segmentation
- Yield prediction