DiSSCo Demo VI 10-09-2025

DiSSCo demos are back! It has been a while, and the team has gone through some changes, but that didn’t mean that development has halted. In fact, DiSSCo has hit an impressive milestone of over 5 million DOIs minted. In addition to filling our production environment, we have some new and existing features to present.

The DiSSCo Transition Project has officially ended, and one of its goals was to deploy a first production version of the Digital Specimen Architecture (DSArch). With the release of the DiSSCo Minimum Viable Product (MVP), we are now able to mint DOIs for specimens and digital media. We are incredibly excited to share with you that we have recently hit the five million mark. For more information, please see our previous blog post.

Besides the DOIs, we went over a number of other topics. Melanie de Leeuw gave us a presentation on the work she and Mathilde Lescanne are doing with the Naturalis user group. Their first focus is on improving the taxonomic annotation workflow. Together with Naturalis collection managers and taxonomic experts, they are testing and improving how experts can add new or modify existing taxonomic identifications through our DiSSCover interface. This is an excellent first step in onboarding our first experts, and we hope to update you on the progress in our next session.

The next topic involved DiSSCo’s activity as an early adopter of the new Darwin Core Conceptual Model and the associated Darwin Core Data Package (DwC-DP). DiSSCo has provided input for this new data standard and is at the forefront of implementation. Any data in DiSSCo will be available for download in the new DwC-DP format. In addition, we also implemented an option to download any dataset in the current standard, the Darwin Core Archive (DwCA). Please be aware that the Darwin Core Conceptual Model will soon be in open review.

As the last topic, we wanted to show a new integration with a Machine Annotation Service (MAS). After community consultation, we identified VoucherVision, developed by William Weaver, as one of the leading tools for label transcription. As label transcription is an often manual, time-consuming task vital to specimen digitisation, we wanted to see whether VoucherVision could help us. By integrating VoucherVision as a MAS in DiSSCo, we are now able to run it on any specimen in our acceptance infrastructure.

We concluded the demonstrations with 15 minutes of questions. We would like to thank all participants and hope to see you at our next demo!

The following topics were presented in the demo:

  • DiSSCo passes 5 million DOIs
  • User group starting on Taxonomic annotations through DiSSCover
  • DwCA and DwC-DP export products from DiSSCo
  • VoucherVision integration in DiSSCo Sandbox

Looking forward to our next demo, which will be held in December, we hope to show the following topics:

  • Ongoing improvements to data ingestion
    • Move to 20 million DOIs
  • Further improvements to annotation flow in DiSSCover
    • Start testing with first experts
  • MAS scheduling on ingestion
  • (public) Virtual Collections (TETTRIs deliverable)
  • DwCA and DwC-DP available as export in DiSSCover
  • Refine annotation evaluation flow
  • Image (derivatives) storage in DiSSCo
  • Support integration with ELViS
  • Switch underlying Identity and Access Management Infrastructure

Slides from the presentation can be downloaded here: https://docs.google.com/presentation/d/1bFm7iM1NJ72bomsx6FJJ48q6IGt_ZXhGjG1Ca9GvQIA/edit?usp=sharing

Five million digital specimens and images in DiSSCo: millions more to come

Authors: Sharif Islam, Wouter Addink, Soulaine Theocharides, and Sam Leeflang.

We are proud to announce a major milestone: over five million digital specimens and digital media objects are now available in DiSSCo. You can see them right now on our DiSSCover platform! These resources are now live and can be referenced through their DOIs (Digital Object Identifiers). 

This achievement marks the culmination of years of effort from the DiSSCo Prepare and DiSSCo Transition projects, resulting in a Minimum Viable Product that begins to demonstrate the power of the digital specimen. Special thanks to the DiSSCo technical development team for their sustained efforts in developing, integrating, and maintaining the services that made this milestone possible.

Screenshot of DiSSCover portal from Aug 19, 2025

Why this matters:

Citability: Using DiSSCo’s DOIs, researchers can unambiguously reference specific specimens in publications. These DOIs are globally unique and assigned to every digital specimen and media item in DiSSCover. The identifiers resolve to a landing page on DiSSCover and carry additional metadata for machine actionability. Note that there is now a specific field for this in the Darwin Core data standard, called digitalSpecimenID.

Provenance: DiSSCo captures all changes related to a specimen or media object. Using DiSSCo’s versioning capability, you can go back in time and see how data has changed. 

Community Curation: DiSSCover’s primary feature is community annotation and curation. Users can annotate data in DiSSCover with the help of AI, allowing a community of experts to quickly address data gaps and quality issues in any specimen collection available on the platform. The more specimens that have a digital presence, the greater the experts’ reach.

How does it work?

Each specimen and media object in DiSSCo is a FAIR Digital Object (FDO). This means it is more than just a specimen or media item. It’s a machine-actionable object with a persistent identifier, structured metadata, and transparent provenance. These identifiers and their corresponding metadata (known as FDO Records) enable reliable referencing, citation, and integration into research workflows across disciplines. The objects form a digital surrogate of the physical specimen, encompassing not only collection event details and taxonomic information but also measurements, assessments, and connections to derived datasets.
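
To make this concrete, the sketch below shows what such an FDO-style record could look like. The field names are illustrative, not the exact openDS or FDO Profile terms; only the DOI and specimen identifier come from examples elsewhere in this post:

```python
# Minimal sketch of a FAIR Digital Object (FDO) record for a digital specimen.
# Field names are illustrative; actual openDS/FDO Profile terms differ.
digital_specimen = {
    "pid": "https://doi.org/10.3535/EF9-THJ-D7S",  # persistent identifier (DOI)
    "type": "DigitalSpecimen",                     # FDO type, enabling type-aware handling
    "metadata": {
        "physicalSpecimenId": "RMNH.ARA.18251",
        "scientificName": "Phintelloides scandens",
    },
    "provenance": [
        {"version": 1, "action": "created"},
        {"version": 2, "action": "updated"},       # every change yields a new version
    ],
}

# Machine actionability: the record can be inspected without fetching the object,
# e.g. to find the latest version from the provenance trail.
latest_version = max(p["version"] for p in digital_specimen["provenance"])
```

The versioned provenance list is what makes the "go back in time" capability described below possible.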

This infrastructure is underpinned by:

  • A Technology Readiness Level 6 (TRL6) Minimum Viable Product of the core infrastructure (see more details about the technical design and background in the DiSSCo Prepare milestone output)
  • The openDS specification for standardising digital specimen data representation 
  • FDO Profiles for consistent, type-aware metadata
  • A robust Persistent Identifier (PID) infrastructure providing DOIs through a collaboration with DataCite
These digital objects are also findable in DataCite Commons. Screenshot captured from DataCite Commons, Aug 19, 2025

Together, these components bring DiSSCo closer to its vision of a pan-European, open, and interoperable research infrastructure for natural science collections, mobilising FAIR biological and geological specimen data. 

What’s next?

In the next phase of development toward full operation, we aim to grow to 20 million combined digital specimens and media objects. With 500 million specimens in the CETAF institutions in Europe alone, of which only a fraction are digitised, we know we have a long way to go, but we are ready for the challenge.

Getting Started with Digital Specimen DOIs: Unlocking the Power of Persistent, Citable Data

In biodiversity and environmental sciences, access to trusted, persistent, and citable data is key to accelerating research and discovery. That’s where Digital Specimen DOIs come in—a foundational tool for connecting your research to verifiable specimen data. Darwin Core supports them through a new term called “digitalSpecimenID” and Pensoft supports them in their ARPHA journal system. But what exactly are they—and how should you use them?

What is a Digital Specimen DOI?

A Digital Specimen is a versioned dataset providing a digital representation of a physical specimen (such as a herbarium sheet, insect, or fossil) held in a natural science collection. It can be annotated, curated, and extended with new data. A DOI (Digital Object Identifier) is a globally unique, persistent identifier that allows that specimen to be reliably referenced, cited, and accessed online.

Think of it as a permanent web address that always points to a specific digital specimen record—no matter where it’s hosted or how systems evolve. Digital Specimen DOIs allow you to:

🔗 Unambiguously reference specimens in publications and datasets.

🧬 Link your research outputs to the exact specimens used in your work.

📊 Track provenance, usage, and impact of specimen-based data over time.

Why Use DOIs for Specimens?

Citable – You can cite specimens like research papers, improving credit for collectors, curators, and institutions.

🔍 Findable – DOIs are indexed by metadata aggregators such as DataCite and Crossref, making your data easier to discover.

🔄 Interoperable – They help link specimen records to DNA sequences, publications, images, annotations, and more.

🛠️ Machine-Actionable – DOIs support automation and integration in digital workflows, critical for large-scale biodiversity research.

🔒 Persistent – Even if websites change, DOIs always point to the right data, helping ensure long-term accessibility.

Why Researchers Should Care

🧾 Cite specimens directly – Credit the physical evidence behind your research.

🔗 Integrate with nanopublications – Reference fine-grained data assertions (e.g., “Specimen X was observed with Trait Y in Location Z”) with full provenance.

📡 Enable reproducibility – Make your data and methods traceable and verifiable.

🧠 Power knowledge graphs – Link specimens to genes, traits, publications, and ecological observations.

🔄 Reuse with confidence – Know exactly where data came from and how it’s been used.

The advantages of DOIs

🔍 Findable – DOIs are indexed by metadata aggregators such as DataCite and Crossref, making your data easier to discover and connecting it to scholarly knowledge graphs.

🛠️ Machine-Actionable – DOIs support automation and integration in digital workflows, critical for large-scale biodiversity research. Digital Specimen DOIs are designed to always provide metadata and a machine-readable version of the data (JSON) in addition to a human-readable HTML page.

🔒 Persistent – Even if websites change, DOIs always point to the right data, helping ensure long-term accessibility. It is the mission of the DOI foundation and its registration agencies to ensure the resolvability of the DOIs.

How Do They Work?

Behind the scenes, a DOI resolves via the Handle System: it links to a Handle record that automatically redirects to an HTML landing page for the specimen. The link to that landing page is included in the Handle record, so it can be updated when the URL of the landing page changes. DataCite provides DiSSCo with a prefix (currently 10.3535), which DiSSCo uses to issue DOIs when digital specimens are published, and DiSSCo provides DataCite with the DOI metadata.

But a Digital Specimen DOI is designed to do much more than that, making use of advanced capabilities of the Handle System. It allows websites to provide metadata, such as the physical specimen catalog number, as context to a user, with a tooltip as demonstrated here. It allows a user to go directly to a specimen catalog record if it exists online, using a locatt suffix: https://doi.org/10.3535/EF9-THJ-D7S?locatt=view:catalog. The same can be used to go directly to a JSON representation: https://doi.org/10.3535/EF9-THJ-D7S?locatt=view:json. Pretty cool. And while the DOI by default redirects to the latest version of the digital specimen, it is also possible to refer to a specific version by appending the version number, for example for version 2: https://doi.org/10.3535/EF9-THJ-D7S?urlappend=/2.
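
These URL variants can also be composed programmatically. The small helper below is a hypothetical sketch (not an official DiSSCo library); the DOI and query parameters are the ones shown above:

```python
# Build the DOI URL variants described above. The DOI is the example from the
# text; the helper itself is illustrative, not an official DiSSCo API.
BASE = "https://doi.org/"

def specimen_urls(doi, version=None):
    urls = {
        "landing_page": f"{BASE}{doi}",                 # default: latest version, HTML
        "catalog": f"{BASE}{doi}?locatt=view:catalog",  # source catalog record
        "json": f"{BASE}{doi}?locatt=view:json",        # machine-readable JSON
    }
    if version is not None:
        # Pin a specific version of the digital specimen
        urls["versioned"] = f"{BASE}{doi}?urlappend=/{version}"
    return urls

urls = specimen_urls("10.3535/EF9-THJ-D7S", version=2)
```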

Ready to Get Started?

If you’re managing specimens or working with specimen data, consider using digital specimen DOIs. It’s a small step with a big impact, for you, your data, and the global research community.

👉 Tip: Include digital specimen DOIs in your methods sections, data citations, and supplementary materials for greater impact and reproducibility. For an example, see this publication: https://doi.org/10.3897/BDJ.12.e129438.

Always include DOIs (and any other persistent identifiers) as the full URL, including the “https://doi.org/” prefix. If that is not possible, use “doi:” instead. Note that the specimen images also have DOIs that you can use. The digital specimen page gives an example of how to cite it, for example:

🔗Naturalis Biodiversity Center (2025). Animalia. Distributed System of Scientific Collections. [Dataset]. https://doi.org/10.3535/5BP-R3Z-6K2.

If you publish your data in Darwin Core, include the DOI using the (new) term digitalSpecimenID. This allows aggregators like GBIF to link to the digital specimen.
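
As an illustration, a Darwin Core occurrence row carrying the digitalSpecimenID term might look like the following. The field values are invented for this sketch; only the term name and the full-URL DOI form follow the guidance above:

```python
import csv, io

# A minimal Darwin Core occurrence row including the new digitalSpecimenID term.
# Values are illustrative examples, not real published records.
fieldnames = ["occurrenceID", "scientificName", "catalogNumber", "digitalSpecimenID"]
row = {
    "occurrenceID": "urn:catalog:RMNH:ARA:18251",          # hypothetical occurrence ID
    "scientificName": "Phintelloides scandens",
    "catalogNumber": "RMNH.ARA.18251",
    "digitalSpecimenID": "https://doi.org/10.3535/EF9-THJ-D7S",  # full URL form
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerow(row)
dwc_csv = buf.getvalue()
```

With the DOI present in this column, an aggregator can resolve the occurrence back to its digital specimen.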

Check if your data sources provide DOIs for digital specimens. If not, encourage data providers to adopt them—many already integrate DOI assignment as part of digitization workflows.

Nanopublications: Going One Step Further

For researchers working at the frontier of semantic data publishing, nanopublications offer a way to formally express and cite individual assertions—like taxonomic identifications, ecological observations, or annotations on a specimen.

When combined with Digital Specimen DOIs, nanopublications allow you to:

  • Reference atomic facts with full provenance.
  • Enable machine-readable assertions.
  • Support automated reasoning across data networks.

Learn more at nanopub.org and explore how to publish assertions connected to digital specimens, or see https://doi.org/10.3897/BDJ.12.e129438 for an example. Let’s take the assertion: “Specimen RMNH.ARA.18251 is Phintelloides scandens sp. nov.”

A nanopublication enhances the semantic precision of this assertion by including author credit and provenance metadata, and provides a machine-readable format that enables automated reasoning.
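
A nanopublication conventionally packages such an assertion into named graphs: the assertion itself, its provenance, and publication info, bound together by a head graph. The sketch below shows that structure in simplified form; all URIs and prefixes are placeholders, not a published nanopub:

```python
# Conventional nanopublication structure as simple subject/predicate/object
# triples grouped into named graphs. URIs and the ORCID are placeholders.
nanopub = {
    "head": {  # binds the three content graphs together
        "assertion": "sub:assertion",
        "provenance": "sub:provenance",
        "pubinfo": "sub:pubinfo",
    },
    "assertion": [
        # "Specimen RMNH.ARA.18251 is Phintelloides scandens sp. nov."
        ("https://doi.org/10.3535/EF9-THJ-D7S",
         "dwc:scientificName",
         "Phintelloides scandens"),
    ],
    "provenance": [
        ("sub:assertion", "prov:wasAttributedTo", "orcid:0000-0000-0000-0000"),
    ],
    "pubinfo": [
        ("this:nanopub", "dct:created", "2025-01-01"),
    ],
}

graph_names = set(nanopub) - {"head"}
```

Because each part lives in its own graph, a machine can cite the bare assertion while still tracing who made it and when.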

Persistent data builds lasting knowledge. Start leveraging Digital Specimen DOIs today—your future self (and your citations) will thank you.

DiSSCo Demo V 23-10-2024

As the DiSSCo Transition Project (DTP) is in full swing, the DiSSCo Development team will be giving regular (bi-monthly) updates on the progress. This progress will mainly focus on Task 3.1 – Further develop the piloted Digital Specimen Architecture (DSArch) into a Minimum Viable Product (MVP). However, as the development team is also involved in other tasks within DTP, as well as other projects such as TETTRIs, we will also include updates on our work there.

The openDS public review closed in mid-September, and we’d like to thank everyone who took the time to review and get involved. After sorting through over 50 comments and suggestions, we’re proud to release openDS 0.4, which you can explore on our Terms Site. The biggest changes from 0.3 are: the Identifier and Agent objects are more generic, we’ve aligned more closely with TDWG’s Audiovisual Core for media objects, and we’ve refined use cases for specimens with multiple components, such as a rock specimen with multiple fossils.

Why not call this a 1.0 version? We know there are still improvements to be made. Before releasing openDS 1.0, we want to align with the TDWG Mineralogy extension to further support geological specimens, publish our terms, and enforce controlled vocabularies where applicable. However, openDS 0.4 is stable enough that it will be the version used in the DiSSCo MVP, which will be released later this year.

openDS 0.4 isn’t the only new data model released this period. DiSSCo has also released a 1.0 version of its FDO Profiles for digital specimens, digital media, annotations, source systems, machine annotation services, and data mappings. FDO Profiles define the metadata for persistent identifier records. That’s useful for machines, which can use the record’s metadata to decide whether they need to resolve the PID at all, and what actions they can expect to perform on the object.
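
To illustrate that idea, a client might inspect PID record metadata before deciding to resolve the full object. The field names below are hypothetical illustrations, not the actual FDO Profile terms:

```python
# Sketch: a client filters PID records by type before resolving any objects.
# 'fdoType' and 'supportsAnnotation' are hypothetical profile fields.
pid_records = [
    {"pid": "10.3535/AAA-111", "fdoType": "DigitalSpecimen", "supportsAnnotation": True},
    {"pid": "10.3535/BBB-222", "fdoType": "DigitalMedia", "supportsAnnotation": False},
    {"pid": "10.3535/CCC-333", "fdoType": "DigitalSpecimen", "supportsAnnotation": True},
]

def worth_resolving(record, wanted_type, needs_annotation):
    """Decide from the PID record alone whether fetching the object is useful."""
    return (record["fdoType"] == wanted_type
            and (not needs_annotation or record["supportsAnnotation"]))

# Only specimen records that support annotation get resolved; the rest are
# skipped without any network round-trip to the object itself.
to_resolve = [r["pid"] for r in pid_records
              if worth_resolving(r, "DigitalSpecimen", needs_annotation=True)]
```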

DiSSCo may be minting DOIs for digital specimens and media, but it’s also important to consider how that information can get back into the source systems. In this demo, we discuss the technical architecture of the data exporter pipeline – from a user’s request to an email in their inbox. This workflow is designed to be flexible, supporting different kinds of data export jobs as use cases arise. 

And that’s not the only new feature in the works. We also saw a preview of the new annotation wizard on DiSSCover. openDS 0.4 is intended to be as flexible as possible to support many different use cases, but that means a user may have difficulty targeting the fields or classes they mean to. The annotation wizard guides users through the annotation process, helping them select the intended target and auto-filling existing data.

We concluded the demonstrations with 15 minutes of questions. We would like to thank all participants and hope to see you at our next demo in December!

The following topics were presented in the demo:

  • openDS review has ended and feedback has been incorporated into version 0.4.0
  • FDO profiles have been updated to version 1.0!
  • Produce auto-accepted annotations when data gets created/updated
  • Entity relationships between digital specimen and -media
  • Annotation wizard in DiSSCover
  • TETTRIs Marketplace
  • Improved data model
  • Integrated form for adding taxonomic service

Looking forward to our next demo, which will be held in December, we hope to show the following topics:

  • Finalise implementation of openDS 0.4 in infrastructure
  • Continue frontend code improvement (annotation / MAS)
  • Documentation for supplying data/metadata
  • Finalise DiSSCo Data Exporter
  • Help MAS providers integrate with DiSSCo infrastructure (T3.2/MS16)
  • Move to Observability stack Naturalis for monitoring/logging/auditing
  • Improved logging/registration and integration with ORCID
  • Tombstoning specimen records
  • Extensive testing of MVP product
  • Vocabulary Server
  • Work towards TETTRIs Marketplace
  • Gather requirements and prioritise for after MVP (WP2.1/MS10 task)
    • Workshop 30 of October

DiSSCo Demo IV 28-08-2024

As the DiSSCo Transition Project (DTP) is in full swing, the DiSSCo Development team will be giving regular (bi-monthly) updates on the progress. This progress will mainly focus on Task 3.1 – Further develop the piloted Digital Specimen Architecture (DSArch) into a Minimum Viable Product (MVP). However, as the development team is also involved in other tasks within DTP, as well as other projects such as TETTRIs, we will also include updates on our work there.

With the openDS public review extended until the 10th of September, it is no surprise that we made it one of the main topics of this demo. We realise that reviewing our seven proposed Digital Objects is no small feat. This is why we presented some additional diagrams, provided information about our tombstone philosophy, and made room for questions and discussion.

Next, we took a short moment to dig into the changes in DiSSCo’s Handle storage. Several improvements have been made to ensure sufficient scaling for the minting of Persistent Identifiers (PIDs) for our Digital Objects. The biggest change is the shift from a relational database (PostgreSQL) to a document store (MongoDB). To implement this, we depended on new functionality in the Local Handle Server, maintained by the Corporation for National Research Initiatives (CNRI). CNRI implemented this change, and DiSSCo will be the first to try scaling PIDs to new heights.

Although our frontend developer Tom Dijkema was on holiday, he was kind enough to provide us with a pre-recorded demo of the changes he is making to DiSSCover. In the past months, we have made significant changes to our frontend, both visible and invisible. These improve the responsiveness and maintainability of DiSSCover, ensuring we can add new functionality in the future. Work on the improvements continues, and we hope to show the full result in the next demo.

We concluded the demonstrations with 15 minutes of questions. We would like to thank all participants and hope to see you at our next demo on the 23rd of October between 11:00–12:00!

The following topics were presented in the demo:
– openDS Terms documentation
– Data model changes implemented in infrastructure
– Improved Handle storage
– Updated MAS documentation
– First test of Naturalis Observability Stack
– Improved MAS support (support for encrypted secrets on deployment)
– Improvements in frontend code DiSSCover
– TETTRIs Marketplace

Looking forward to our next demo, which will be held on the 23rd of October, we hope to show the following topics:
– Incorporate review feedback and move openDS to a 1.0 version
– Finalise implementation of openDS 1.0 in infrastructure
– Continue frontend code improvement
– Produce annotations when data gets created/updated
– Documentation for supplying data/metadata
– Help MAS providers integrate with DiSSCo infrastructure (T3.2/MS16)
– Move to Observability stack Naturalis for monitoring/logging/auditing
– Tombstoning specimen records
– Improved logging/registration and integration with ORCID
– Extensive testing of MVP product
– Vocabulary Server
– TETTRIs Marketplace, incorporate feedback on prototype

Zoom link to demo session (October 23rd, 11:00h-12:00h CEST)
https://tinyurl.com/zoom-dissco-demo

openDS Public Review

openDS Public Review has been extended!

We are announcing an extension of the openDS public review until September 10th 2024. Due to holidays and other commitments, such as TDWG 2024, several members of our community have indicated that an extension would be welcome. We have decided to extend the review period to give some additional time for review. Please know that any comments or questions are welcome and can be left on the DiSSCo GitHub page.

You can find class and term definitions in our documentation sections. The full JSON schemas are also available in our GitHub repositories.

What is openDS?

openDS is a comprehensive data specification for “Digital Specimens” — digital surrogates of physical specimens with related information.

Key Goals:

  1. Support technical implementation of Digital Extended Specimens as described in the Hardisty et al. (2022) paper.
  2. Enable an extensible network of biodiversity data records
  3. Comply with GBIF’s unified data model
  4. Create FAIR, machine-actionable, mutable data objects
  5. Standardise digital representation of natural science collection specimens
Digital Specimen Entity Relationship diagram

Current Focus and Future Plans:

  • Currently covers data shared through GBIF IPT and BioCASe installations
  • Future versions will include more specimen-related data (loans, permits, field notes, etc.)
  • Designed to capture specimen data throughout its lifecycle

Why Review Now?

DiSSCo plans to release its Digital Specimen infrastructure by the end of 2024/early 2025. We need your input to upgrade the openDS data model from version 0.3 to a community-accepted version 1.0.

How to Review:

  1. Navigate to the documentation page for each class and term.
  2. Submit questions or comments via GitHub
  3. Include term names when providing feedback on specific terms
  4. Note that most terms come from existing standards, with some openDS-specific additions
  5. We aim to use controlled vocabularies where appropriate. These are not yet part of the documentation

Your feedback is crucial in shaping the future of digital specimen data management. Join us in this important review process!

Report from Davos World Biodiversity Forum 2024

The 3rd World Biodiversity Forum in Davos, Switzerland, brought together about 800 participants. The event covered diverse topics and discussions on the future of biodiversity, emphasising marine conservation, freshwater biodiversity, and climate change impacts. Discussions focused on data sharing, interdisciplinary approaches, and nature-based solutions to bridge global biodiversity monitoring gaps through trans-disciplinary practices and data initiatives.

Davos Congress Center, site of the World Biodiversity Forum

Key highlights from my participation and discussions:

  • The importance and centrality of data and modelling were mentioned during the opening plenary. Throughout the forum, there were several presentations focusing on generating decision-relevant information for the 2030 biodiversity goals, addressing gaps and biases in mobilising biodiversity data, identifying blind spots and overlaps in biodiversity indicators, and translating global goals into national actions using a specific framework. However, these discussions on data came with a caveat. Understanding the complex interplay of biodiversity, society, politics, and nature often requires a focus on local contexts.
  • While the concepts of “story” and “storytelling” can sometimes seem cliché, several talks underscored the importance of context and the need for fit-for-purpose data based on both global vision and local concerns. As we accelerate digitisation and monitoring efforts, a deluge of data is arriving, along with a need for contextual understanding. How do we get that context? We need methods to quantify the local significance of the biodiversity crisis. One successful example shared was conservation work addressing human-wildlife conflict in Uganda, as highlighted here: WWF Uganda Initiative.
  • Naming and appreciating local flora and fauna is crucial in biodiversity. As we all know, taxonomy is an important aspect of biodiversity. To paraphrase one of the speakers, “nothing in biology makes sense if the organisms studied are not identified and named.” However, with all these complex discussions about the tree of life, phylogeny, subspeciation, and scientific objectivity, there is often a lack of focus on appreciating and understanding the flora and fauna around us. Several plenary members emphasised the importance of taking the time to “smell the roses.”
  • I presented FAIR data and modelling work in the session on Digital Twin applications aimed at fostering actions for biodiversity conservation. During this session, we explored how traditional conservation decisions typically rely on ground-truth observations and remote monitoring, sometimes supplemented by predictive models or experiments. However, these methods can be slow or insufficiently responsive to rapid environmental shifts, which can delay timely interventions.
    The Digital Twin paradigm offers a promising approach by enabling predictive modelling of ecosystems, thereby facilitating quicker adaptation to environmental changes. There is still progress to be made in biodiversity and ecology twinning, especially concerning the foundational integration of data and modelling. However, we were inspired by exciting new ideas shared by projects like BioDT, DTO-Bioflow, European Digital Twin Ocean-Blue Cloud 2026 cooperation, LTER-LIFE, and Crane Radar among others.
  • David Obura (current chair of IPBES) discussed the importance of simplifying our presentation and communication. For instance, he highlighted the relationship between nature, the economy, and society. While this concept is often complicated by charts and numbers, it can be effectively conveyed through a simplified image such as this:
  • Biodiversity Genomics: A dedicated session focused on the urgency of the Biodiversity Genomics Europe project. It addressed the challenges of standardisation (data and sequencing processes), community collaboration, and funding. Emphasis was placed on creating a toolkit-based approach to overcome logistical issues (such as shipment and permits) and legal considerations (such as Nagoya Protocol and Access and Benefit Sharing agreements) in biodiversity genomics. There was also an emphasis on not only technical knowledge (such as how to sample and sequence certain organisms) and data sharing (via GBIF, GenBank, BOLD) but also on improving how we document errors, findings, and challenges within the scientific community. This will allow for better communication of these aspects to our funders to showcase gaps and needs.
  • Visual Sketching: The forum included an interesting visual sketching output (Building pathways towards desirable futures for Arctic biodiversity– a design thinking workshop) that captured the essence of discussions in a creative and engaging manner.
  • Another interesting plenary presentation was by Michelle Lim, Associate Professor of Law at Singapore Management University. She discussed the concept of the “endling” in her work, challenging typical legal terms that often obscure the harsh reality of extinction, such as “threatened” and “endangered”. An endling represents the last surviving individual of a species, symbolising the endpoint of biodiversity loss. Lim advocates for a paradigm shift in environmental law, urging recognition and emotional connection to extinction through narratives of endlings like the Christmas Island Forest Skink. She proposes a broader, pluralistic approach to environmental law that embraces “other-than-human” perspectives and utilises “fiction” (stories and narratives instead of just data) as a legal tool. This approach acknowledges diverse knowledge systems and promotes radical imagination in addressing global environmental challenges. You can explore her ideas further in these two papers (both published in academic legal journals): Extinction: hidden in plain sight – can stories of ‘the last’ unearth environmental law’s unspeakable truth? and Fiction as Legal Method

Even though not all aspects and regions were represented, overall, the World Biodiversity Forum 2024 provided a platform for exchanging ideas, addressing challenges, and envisioning a positive future for biodiversity through collaborative efforts and innovative approaches.

DiSSCo Demo III 19-06-2024

As the DiSSCo Transition Project (DTP) is in full swing, the DiSSCo Development team will be giving regular (bi-monthly) updates on the progress. This progress will mainly focus on Task 3.1 – Further develop the piloted Digital Specimen Architecture (DSArch) into a Minimum Viable Product (MVP). However, as the development team is also involved in other tasks within DTP, as well as other projects such as TETTRIs, we will also include updates on our work there.

This third demonstration of the progress on Task 3.1 focuses on two main topics. The first is batch annotation. In a pre-recorded demo, Soulaine Theocharides explains why batch annotation is vital to the success of DiSSCo. Time is valuable, whether the agent is a human or a machine, and we don’t want any unnecessary duplicate work. So if we can identify why one specimen was annotated, we can track down all similar specimens and annotate them as well. Developing batch annotation has posed many challenges, but in this demo we show that we are close to solving them and getting this functionality integrated into DiSSCo.
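
The core idea can be sketched simply: capture why a specimen was annotated as a predicate, then apply the same annotation to every specimen matching it. The field names and values below are illustrative, not the DiSSCo batch-annotation API:

```python
# Sketch of batch annotation: the criterion that triggered one annotation is
# reused to find and annotate all similar specimens. Fields are illustrative.
specimens = [
    {"id": "20.5000/A1", "locality": "Leiden", "country": None},
    {"id": "20.5000/A2", "locality": "Leiden", "country": None},
    {"id": "20.5000/A3", "locality": "Paris",  "country": None},
]

# Why the first specimen was annotated: its locality implies a missing country.
criterion = lambda s: s["locality"] == "Leiden" and s["country"] is None
annotation = {"field": "country", "value": "Netherlands"}

# Apply the same annotation to every specimen matching the criterion.
batch = [s for s in specimens if criterion(s)]
for s in batch:
    s[annotation["field"]] = annotation["value"]

annotated_ids = [s["id"] for s in batch]
```

One human (or machine) decision thus propagates to the whole matching set instead of being repeated per specimen.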

In the second part of the demo, we went over some data model changes. One of the most important things to get right before launching the MVP is the data models. In the past months we made another step in getting all the models ready: we resolved open comments, added additional prefixes, and reused more terms from existing vocabularies. The structure of schema.dissco.tech changed slightly as we moved the version number back one spot. To document the data models, we decided to reuse the work done by Ben Norton. This means we launched a new website, dev.terms.dissco.tech, on which we publish a human-readable version of our schemas, generated entirely from the JSON schemas.
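
The generation step can be pictured as a simple transform from a JSON schema to human-readable term documentation. The schema snippet and term names below are invented for illustration, not actual openDS schemas:

```python
# Sketch: render human-readable term documentation from a JSON schema, in the
# spirit of dev.terms.dissco.tech. The schema and terms below are invented.
schema = {
    "title": "DigitalSpecimen",
    "properties": {
        "ods:physicalSpecimenID": {
            "type": "string",
            "description": "Identifier of the physical specimen.",
        },
        "ods:midsLevel": {
            "type": "integer",
            "description": "Calculated MIDS digitisation level.",
        },
    },
}

def render_terms(schema):
    """Turn a JSON schema's properties into a simple documentation page."""
    lines = [f"# {schema['title']}"]
    for term, spec in schema["properties"].items():
        lines.append(f"- `{term}` ({spec['type']}): {spec['description']}")
    return "\n".join(lines)

docs = render_terms(schema)
```

Generating the documentation from the schemas keeps the two from drifting apart: there is a single source of truth.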

We concluded the demonstrations with 15 minutes of questions. We would like to thank all participants and hope to see you at our next demo on the 28th of August between 11:00–12:00.

The following topics were presented in the demo:
— Batch annotation for Machine Annotation Service
— New structure for schemas.dissco.tech online
— Restructuring of openDS datamodels and new versions
— openDS Terms documentation in progress (dev.terms.dissco.tech)
— Updated MIDS calculation (based on the SSSOM mapping developed by Mathias Dillen)
— Test run with DataCite, minted 1491 test DOIs
— (Draft) Linking service with BOLD EU
— End-user testing DiSSCover
— Improvements in frontend code DiSSCover
— In draft
  — RFC for vocabulary server
  — Authorization matrix
— Infrastructural upgrades
— DOI/Handle servers infrastructure-as-code
— Project outputs
  — MS14 Definition of a TRL6 Minimum Viable Product of core infrastructure
  — Updated services landscape overview
— TETTRIs
  — Marketplace – first implementation of the backend, based on Cordra

Looking forward to our next demo, which will be held on the 28th of August, we hope to show the following topics:
— openDS Terms documentation and datamodel 1.0
— Implement data model changes through infrastructure
— Frontend code improvement
— Documentation for supplying data/metadata
— Improved Handle storage
— Move to Observability stack Naturalis for monitoring/logging/auditing
— Services uptime monitoring page
— Improved MAS support (support for encrypted secrets on deployment)
— Tombstoning specimen records
— Workshop and further implementation of the TETTRIs marketplace
— Plan for a Vocabulary Server
— Improved logging/registration and integration with ORCID

Zoom link to demo session (August 28th, 11:00h-12:00h CEST)
https://tinyurl.com/zoom-dissco-demo

DiSSCo Demo 24-04-2024

As the DiSSCo Transition Project (DTP) is in full swing, the DiSSCo Development team will be giving regular updates on its progress. These updates will mainly focus on Task 3.1 – Further develop the piloted Digital Specimen Architecture (DSArch) into a Minimum Viable Product (MVP). However, as the development team is also involved in other tasks within DTP, as well as in other projects such as BiCIKL and TETTRIs, we will also include updates on our work there.

This second demo session comes after an introductory first demo (available on dissco.tech). It focuses on the Minimum Viable Product of the Digital Specimen Data Infrastructure that is to be delivered at the end of the DiSSCo Transition Project (see http://www.dissco.eu for more information), covering topics such as the work done for DiSSCo in the BiCIKL project, the first DiSSCo-minted DOIs, infrastructural upgrades and perspectives for the future.

The following topics were presented in the demo:
— DOI DataCite infrastructure
— Early landing pages for Source Systems, Mappings and Machine Annotation Services
— First role-based authorisation
— Integration of taxonomic resolution service
— Scheduling and automatic triggering of translator services
— Creation of the translator job record
— Reworked taxonomic filters
— Configurable Machine Annotation Service timeout
— Infrastructure upgrades
— TDWG abstracts imminent
— Final BiCIKL reporting
— Early TETTRIs marketplace prototype

Looking forward to our next demo, which will be held on the 19th of June, we hope to show the following topics:
— Batch annotation for Machine Annotation Service version 1
— Update MIDS calculation to the latest version
— Linking service with BOLD
— Simplified annotation cases in DiSSCover
— Documentation for supplying data and metadata
— Tombstoning specimen records
— Further implementation of the TETTRIs prototype
— Set up a plan for the MVP (Milestone 3.1)
— Vocabulary Server
— Improve logging/registration and integration with ORCID
— Virtual Collections
— End user testing

DiSSCo Demo 21-02-2024

As the DiSSCo Transition Project (DTP) is in full swing, the DiSSCo Development team will be giving regular updates on its progress. These updates will mainly focus on Task 3.1 – Further develop the piloted Digital Specimen Architecture (DSArch) into a Minimum Viable Product (MVP). However, as the development team is also involved in other tasks within DTP, as well as in other projects such as BiCIKL and TETTRIs, we will also include updates on our work there.

In this first session, we gave a live demo of what the team has been developing over the last couple of months. After the demo, we took a short look at the roadmap and there was room for discussion. The group for this first session was relatively small; we would like to include a broader audience in the future.

The following topics were presented in the demo:
— JSON schemas exposed through https://schemas.dissco.tech/
— New version of openDS and annotations deployed on https://sandbox.dissco.tech/
— Deeper dive into Machine Annotation Service
— Introduction of Machine Job Records
— Development of generic taxonomic name resolution service, based on ChecklistBank data and GBIF matching algorithm
— New icons for topicDisciplines and specimens
— Discussions and new path forward with minting of DOIs with DataCite
— TETTRIs marketplace mock-ups
— A plan for a CETAF/DiSSCo Collection Registry

Looking forward to our next demo, which is now scheduled on the 24th of April, we hope to show the following topics:
— Mint DOIs with DataCite
— Integrate taxonomic name resolution service in ingestion process
— Batch annotation for Machine Annotation Service v1
— Landing pages for Source Systems, Mappings and Machine Annotation Services
— New filter options, especially for searches on taxonomy
— Scheduling and automatic triggering of translator services based on cron
— Update MIDS calculation to the latest version
— Virtual Collections (stored searches, mutable)
— Infrastructural upgrades
— Centralised logging and monitoring solution
— Automated scaling of EC2 instances (Karpenter)