Description
Artificial intelligence (AI) is increasingly used in high-stakes decision-making such as food safety. Deep learning is becoming the standard AI approach for food safety applications. It relies on neural networks with numerous interconnected layers and nonlinear relationships between them. Even when we try to understand and describe these layers and connections, it is infeasible to fully grasp how the neural network reaches its decisions. Deep learning is therefore often considered a “black box.” There are concerns that these black boxes may contain unnoticed biases, potentially leading to serious consequences in high-stakes decision-making. To address this, there is a growing demand for methods that improve our understanding of the black box, commonly referred to as explainable artificial intelligence (XAI). In this presentation, Bas van der Velden will cover the fundamentals of explainable AI and explore various use cases of XAI in the field of food safety.

| Period | 23 Oct 2024 → 24 Oct 2024 |
| --- | --- |
| Event title | Data Readiness on Artificial intelligence |
| Event type | Conference/symposium |
| Location | Parma, Italy |
| Degree of Recognition | International |
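By way of illustration (not material from the presentation itself), one common family of XAI methods probes a trained black-box model by perturbing its inputs and observing how the prediction changes. The sketch below shows a minimal perturbation-based ("occlusion") feature attribution for tabular data; the `occlusion_importance` helper and the assumption of a scikit-learn-style classifier exposing `predict_proba` are illustrative choices, not taken from the source.

```python
# Minimal sketch of perturbation-based feature attribution (an XAI technique).
# Assumptions (not from the source): a classifier with a predict_proba method
# and tabular input features; all names here are hypothetical.
import numpy as np

def occlusion_importance(model, X_background, x, target_class):
    """Per-feature importance for one sample `x` via mean-value occlusion."""
    # Model's confidence for the target class on the unmodified sample.
    baseline = model.predict_proba(x.reshape(1, -1))[0, target_class]
    # Use dataset means as "neutral" replacement values for each feature.
    means = X_background.mean(axis=0)
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_perturbed = x.astype(float)   # astype returns a copy
        x_perturbed[j] = means[j]       # "occlude" feature j
        p = model.predict_proba(x_perturbed.reshape(1, -1))[0, target_class]
        scores[j] = baseline - p        # how much the prediction drops
    return scores
```

Applied to, say, a classifier trained on food-safety measurements, larger scores indicate features the model relies on most for that individual prediction, since neutralizing them causes the largest drop in confidence.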
Related content
Projects
- EU-23034 ECOREADY (KB-50-005-008), Project: LVVN project
- EU-22021 HOLiFOOD (KB-50-005-006), Project: LVVN project
- EU-23033 EFRA (KB-50-005-007), Project: LVVN project
- Small Innovative projects (KB-38-001-029), Project: LVVN project