Digital twins critically depend on the availability of high-quality data. They need to monitor their real-world counterparts in order to learn from them (improving their internal models) and to guide them. This means the data must be available at the right moment, for example as a continuous stream in real time, but also in a form that the software realizing the digital twin can understand. This requires a proper technical infrastructure for data sharing, one that can express the meaning and provenance of the data.
In practice, however, data is often poorly captured and stored in data silos usable for only one specific purpose. Data can come from public or proprietary repositories, from sensors, from unstructured sources such as publications, and so on. Due to the inherent complexity of these different data sources, accessing the data and transforming it into a format that a digital twin can consume often requires manual work and negotiation. To overcome these issues, there is a need for guidance and standardisation in data acquisition, transfer, interpretation and processing.
The first objective of this project is to explore different approaches to realizing either data streams or data trains that feed, and result from, digital twins. Data streams are flows of data that interested 'consumers' can process. In data trains, a 'consumer' travels through a data station, processes some data locally and takes along only the result.
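The contrast between the two patterns can be sketched in a few lines of Python. This is a minimal illustration only: the class and function names are hypothetical and do not belong to any existing digital-twin or data-train framework. The key difference is where the raw data ends up: in a stream it leaves the source, while at a data station only the consumer's result leaves.

```python
def data_stream(source):
    """Data stream: every raw record is pushed to the consumer,
    who processes it on their own side."""
    for record in source:
        yield record  # raw data leaves the source

class DataStation:
    """Data train: the consumer's task travels to the station and
    runs locally; only its result is taken along."""
    def __init__(self, records):
        self._records = records  # raw data stays at the station
    def visit(self, task):
        return task(self._records)  # only the result leaves

# Hypothetical sensor readings feeding a digital twin.
readings = [20.1, 20.4, 19.8, 21.0]

# Stream: the consumer receives all raw readings.
received = list(data_stream(readings))

# Train: the consumer takes along only an aggregate.
station = DataStation(readings)
mean_reading = station.visit(lambda rs: sum(rs) / len(rs))
```

In the stream case `received` holds every raw value; in the train case the consumer holds only `mean_reading`, which matters when the station's data is sensitive or too large to move.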
The second objective is to develop methods for expressing data and metadata, reusing existing (semantic) standards where possible. Approaches to both objectives are to be evaluated in use cases from current WUR digital twin flagships.
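As an illustration of reusing an existing semantic standard, metadata for a data source could be expressed as JSON-LD using the W3C DCAT vocabulary. The dataset described below is a hypothetical example, not one of the project's actual data sources; it is a sketch of the kind of machine-readable description a digital twin could consume.

```python
import json

# Hypothetical metadata record for a sensor stream, expressed as
# JSON-LD with the W3C DCAT and Dublin Core vocabularies.
metadata = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Greenhouse climate sensor readings",
    "dct:description": "Temperature and humidity stream feeding a digital twin.",
    "dcat:keyword": ["digital twin", "sensor", "greenhouse"],
}

# Serialize for exchange between data stations and consumers.
print(json.dumps(metadata, indent=2))
```

Because the vocabulary terms are standardised, any DCAT-aware consumer can interpret the title, description and keywords without bilateral negotiation about field names.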
Effective start/end date: 1/01/20 → 31/12/20