Linking error measures to model questions

Bas Jacobs*, Hilde Tobi, Geerten M. Hengeveld

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

1 Citation (Scopus)


Models for forecasting various ecosystem properties have great potential, but that potential comes with a need for model validation. Before such validation can be performed, we need to define what it means for the model to perform well, and this depends on the question being asked. In practice, it is tempting to ignore the model question and apply a standard, well-known error measure when comparing the model to the available data. The question is whether this practice is adequate. Here, we defined different types of model-data mismatch that may be more or less relevant to different types of questions. We show that error measures differ in their sensitivity to the type of mismatch and in their robustness to sparse and noisy data. The results imply that a careful selection of error measures, using a clearly defined ecological question as a starting point, is vital to proper model evaluation. While we present our results as generally applicable to the validation of any type of forecasting model, we also illustrate them using cyanobacterial bloom modelling as a detailed example of a case where different questions can be asked of the same model.
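As a toy illustration of the abstract's central point (a hedged sketch, not the paper's actual analysis or data), the snippet below compares two common error measures, RMSE and MAE, on an idealised bloom-like time series against two predictions that are wrong in different ways: one with a timing mismatch (the peak arrives late) and one with a magnitude mismatch (the peak is damped). All series, shapes, and parameter values here are invented for illustration.

```python
import numpy as np

# Idealised "observed" bloom: a Gaussian peak over a 30-day window (illustrative only)
t = np.arange(30)
observed = np.exp(-0.5 * ((t - 15) / 3) ** 2)

# Two hypothetical model predictions with different kinds of mismatch:
shifted = np.exp(-0.5 * ((t - 18) / 3) ** 2)  # correct shape, 3-day timing lag
damped = 0.5 * observed                        # correct timing, half the amplitude

def rmse(obs, pred):
    """Root mean squared error: penalises large point-wise deviations strongly."""
    return np.sqrt(np.mean((obs - pred) ** 2))

def mae(obs, pred):
    """Mean absolute error: weights all point-wise deviations linearly."""
    return np.mean(np.abs(obs - pred))

# The two measures rank the two mismatch types differently depending on
# peak width and lag, which is the kind of question-dependence at issue.
for name, pred in [("timing mismatch", shifted), ("magnitude mismatch", damped)]:
    print(f"{name}: RMSE={rmse(observed, pred):.3f}, MAE={mae(observed, pred):.3f}")
```

A measure that looks at peak timing directly (e.g. the lag between observed and predicted maxima) would judge the damped prediction perfect and the shifted one poor, while RMSE and MAE blend both mismatch types into a single number; which behaviour is desirable depends on the ecological question being asked.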

Original language: English
Article number: 110562
Journal: Ecological Modelling
Publication status: Published - Jan 2024


Keywords:

  • Cyanobacterial blooms
  • Error measures
  • Goodness of fit
  • Model evaluation
  • Model fit
  • Research methodology
  • Validation


