Following discussion of GSIM 2.0 at the CES Bureau meeting in February, a paper for discussion at the CES plenary session (in June) has been drafted.
The organization of the ModernStats World Workshop in October is moving forward. The call for abstracts and information note has been published here. The workshop will focus on the use of standards and tools to improve interoperability, transparency and metadata-driven pipelines, with the aim of sketching the future of statistical production beyond 2025.
UNECE contributed to the COSMOS conference on smart metadata (11-12 April), presenting a poster, with InKyung serving on its scientific committee. The topics discussed provide a useful basis for further consideration at the ModernStats World Workshop.
There was also a useful meeting on April 10th in Paris, right before COSMOS. The morning discussion focused on ways in which SDMX and DDI are complementary, and the afternoon was dedicated to learning about transformation and validation languages (VTL/SDTL/SDTH) and how they can be used in the context of automated pipelines. As well as exchanging insights about new developments (SDTL/SDTH and VTL-DDI interoperability), it was a good opportunity for DDI and SDMX experts to share ideas on this topic. There was agreement to draft a brief note suggesting a favoured approach to addressing the interoperability of SDMX and DDI.
Work on the revision of GSBPM is continuing as planned, currently examining feedback received on the Design phase of GSBPM.
Work on finalizing the SDMX-DDI-GSBPM report is close to completion, though it is proceeding more slowly than anticipated.
We are still seeking a leader for the activity on the Common Statistical Data Architecture (CSDA) – suggestions are appreciated!
Last but not least, we also submitted a proposal to the 65th ISI World Statistics Congress 2025 in The Hague. The proposed session will discuss how implementation standards can be used together with conceptual ModernStats models to improve interoperability at technical, semantic and organizational levels, and how they can be leveraged to build statistical production pipelines that are metadata-driven, semantically consistent and reusable.