TY - JOUR
T1 - Can convolutional neural networks support agronomic analysis of cereal–legume canopy cover dynamics?
AU - Kottelenberg, David
AU - Bastiaans, Lammert
AU - van Essen, Rick
AU - Kootstra, Gert
AU - Douma, Jacob C.
PY - 2026/3/1
Y1 - 2026/3/1
N2 - Context: Understanding crop–crop and crop–weed interactions is essential for designing overyielding and weed-suppressive intercropping systems. Measurements of canopy cover over time can provide insights into these interactions, but are labour-intensive to collect. Machine learning methods, specifically convolutional neural networks (CNNs), could automatically analyse the cover of individual species from canopy cover photos, yet the quality of cover assessment needed to study species interactions remains unclear. Objective: This study aimed to quantify competitive dynamics in cereal–faba bean intercrops based on canopy cover and to assess the CNN performance required for reliable analysis. Methods: We collected RGB images from cereal–faba bean intercrops varying in cereal species (barley, rye, triticale, wheat), triticale:faba bean mixing ratio (1:1, 1:3, 3:1), and spatial design (row or mixed). Canopy cover was manually annotated for 397 images, identifying cereal, faba bean, and weed classes. Four CNN models of varying complexity were trained, the simplest of which were used off the shelf. We compared qualitative patterns and Lotka–Volterra competition parameters between ground-truth and CNN-segmented data. Results: Ground-truth data revealed that rye was the most competitive cereal and wheat the least, as reflected in the Lotka–Volterra intrinsic growth rate parameters. Separating cereals and legumes into rows and reducing the cereal proportion in intercrops decreased cereal competitiveness relative to faba bean, resulting in more even canopy cover and more symmetrical competition parameters between species. All CNN models achieved high accuracy (Intersection over Union (IoU) = 0.900–0.926). While CNN-based segmentations matched ground-truth patterns visually, only our most complex model came close to the ground-truth parameter estimates, whereas the other three produced values too uncertain or biased to support the same conclusions. Conclusion: We conclude that moderate-complexity CNN models are sufficient to qualitatively interpret cover trends, but more refined ecological analysis requires more complex CNNs. A sensitivity analysis could help quantify the performance needed before training such a complex CNN.
AB - Context: Understanding crop–crop and crop–weed interactions is essential for designing overyielding and weed-suppressive intercropping systems. Measurements of canopy cover over time can provide insights into these interactions, but are labour-intensive to collect. Machine learning methods, specifically convolutional neural networks (CNNs), could automatically analyse the cover of individual species from canopy cover photos, yet the quality of cover assessment needed to study species interactions remains unclear. Objective: This study aimed to quantify competitive dynamics in cereal–faba bean intercrops based on canopy cover and to assess the CNN performance required for reliable analysis. Methods: We collected RGB images from cereal–faba bean intercrops varying in cereal species (barley, rye, triticale, wheat), triticale:faba bean mixing ratio (1:1, 1:3, 3:1), and spatial design (row or mixed). Canopy cover was manually annotated for 397 images, identifying cereal, faba bean, and weed classes. Four CNN models of varying complexity were trained, the simplest of which were used off the shelf. We compared qualitative patterns and Lotka–Volterra competition parameters between ground-truth and CNN-segmented data. Results: Ground-truth data revealed that rye was the most competitive cereal and wheat the least, as reflected in the Lotka–Volterra intrinsic growth rate parameters. Separating cereals and legumes into rows and reducing the cereal proportion in intercrops decreased cereal competitiveness relative to faba bean, resulting in more even canopy cover and more symmetrical competition parameters between species. All CNN models achieved high accuracy (Intersection over Union (IoU) = 0.900–0.926). While CNN-based segmentations matched ground-truth patterns visually, only our most complex model came close to the ground-truth parameter estimates, whereas the other three produced values too uncertain or biased to support the same conclusions. Conclusion: We conclude that moderate-complexity CNN models are sufficient to qualitatively interpret cover trends, but more refined ecological analysis requires more complex CNNs. A sensitivity analysis could help quantify the performance needed before training such a complex CNN.
KW - Canopy cover
KW - Cereal
KW - Competition
KW - Convolutional neural network
KW - Intercropping
KW - Legume
KW - Lotka–Volterra
U2 - 10.1016/j.fcr.2025.110236
DO - 10.1016/j.fcr.2025.110236
M3 - Article
AN - SCOPUS:105022471881
SN - 0378-4290
VL - 337
JO - Field Crops Research
JF - Field Crops Research
M1 - 110236
ER -