Abstract
Delphi studies are often conducted with the aim of achieving consensus or agreement among experts. However, many Delphi studies fail to offer a concise interpretation of what consensus or agreement means. Although several statistical operationalizations of agreement exist, hardly any of these indices are used in Delphi studies. In this study, computer simulations were used to study different indices of agreement within different Delphi scenarios. A distinction was made between consensus indices (Demoivre index), agreement indices (e.g., Cohen's kappa and generalizations thereof), and association indices (e.g., Cronbach's alpha, the intraclass correlation coefficient). Delphi scenarios were created by varying the number of objects, the number of experts, the distribution of object ratings, and the degree to which agreement increased between subsequent rounds. Each scenario consisted of three rounds and was replicated 1000 times. The simulation study showed that, in the same data, different indices suggest different levels of agreement and also different degrees of change in agreement between rounds. In applied Delphi studies, researchers should therefore be more transparent about their choice of agreement index and report the value of the chosen index in every round, so as to provide insight into how the suggested level of agreement has developed across rounds.
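To illustrate the kind of scenario the abstract describes, the following is a minimal sketch (not the authors' actual simulation code) of one Delphi scenario: experts rate objects on a 5-point scale over three rounds, ratings are pulled toward the group median between rounds to mimic increasing agreement, and Fleiss' kappa (one generalization of Cohen's kappa to multiple raters) is computed per round. The object and expert counts and the `PULL_PROB` parameter are illustrative assumptions, not values taken from the study.

```python
# Illustrative sketch of a single Delphi simulation scenario; all settings
# below are assumptions for demonstration, not the study's design choices.
import numpy as np

rng = np.random.default_rng(0)
N_OBJECTS, N_EXPERTS, N_CATEGORIES, N_ROUNDS = 20, 15, 5, 3
PULL_PROB = 0.4  # chance an expert moves one step toward the median each round

def fleiss_kappa(ratings, n_categories):
    """Fleiss' kappa for an objects-by-experts matrix of category ratings."""
    n_objects, n_experts = ratings.shape
    # counts[i, k] = number of experts assigning category k to object i
    counts = np.zeros((n_objects, n_categories))
    for k in range(n_categories):
        counts[:, k] = (ratings == k).sum(axis=1)
    p_i = (np.sum(counts ** 2, axis=1) - n_experts) / (n_experts * (n_experts - 1))
    p_bar = p_i.mean()                              # observed agreement
    p_k = counts.sum(axis=0) / (n_objects * n_experts)
    p_e = np.sum(p_k ** 2)                          # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Round 1: independent ratings drawn uniformly over the categories.
ratings = rng.integers(0, N_CATEGORIES, size=(N_OBJECTS, N_EXPERTS))
for r in range(1, N_ROUNDS + 1):
    print(f"round {r}: Fleiss' kappa = {fleiss_kappa(ratings, N_CATEGORIES):.3f}")
    # Between rounds, each expert moves one step toward the object's median
    # rating with probability PULL_PROB, so agreement tends to rise.
    medians = np.median(ratings, axis=1, keepdims=True)
    step = np.sign(medians - ratings).astype(int)
    move = rng.random(ratings.shape) < PULL_PROB
    ratings = ratings + step * move
```

In a full simulation along the lines of the abstract, such a scenario would be replicated many times (the study uses 1000 replications) and several indices would be computed on the same simulated data so their suggested agreement levels can be compared across rounds.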
| Original language | English |
| --- | --- |
| Pages (from-to) | 1607-1614 |
| Journal | Technological Forecasting and Social Change |
| Volume | 80 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - 2013 |
Keywords
- coefficient-alpha
- kappa-coefficient
- methodology
- consensus
- lessons
- requirements
- reliability
- prevalence
- management
- accuracy