For the design and implementation of data space components, the RAM offers practical guidelines. It serves as both an introduction to software architecture and a handbook of well-established best practices. What are the fundamentals of data spaces and the RAM? The first fundamental: data sharing is always peer-to-peer. Data originates with the data provider and is transferred directly to the data consumer; there is no intermediary. A data owner or data holder is responsible for the data. It can be shared with someone else, and a policy governing how the consumer is allowed to use the data can be attached to it. That is the basic idea. Nothing more, nothing less.
For this to work, we need interoperability. There are four levels of interoperability: legal, organizational, technical, and semantic. The legal and organizational levels are mainly handled by an authority that manages the data space. You can learn more about this in the IDSA Rulebook. The job of the data space participant, as a data provider or consumer, is to take care of technical and semantic interoperability.
Data must have a clear meaning
When participants create a data asset, a data offering, they must first provide a description and annotate it with semantics before it can be shared. For example, temperature data can only be shared meaningfully once certain aspects are clarified: Is it body temperature or room temperature? Is it in degrees Celsius or Fahrenheit? Data must have a clear meaning. Semantics help describe the data and help other participants find it. This information, the metadata, must be published and shared across the data space.
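To make the temperature example concrete, such an annotation might look as follows. This is a minimal sketch loosely modeled on the W3C DCAT vocabulary, not the normative IDS metadata format; the function name and the `custom:` keys are illustrative assumptions.

```python
# Sketch: annotating a temperature data offering with machine-readable
# semantics, loosely modeled on the W3C DCAT vocabulary. All keys and
# the function name are illustrative, not the normative IDS format.

def describe_offering(title, description, unit, subject, media_type):
    """Build a minimal metadata record for a data offering."""
    return {
        "@type": "dcat:Dataset",       # DCAT class for a data asset
        "dct:title": title,
        "dct:description": description,
        "custom:unit": unit,           # e.g. "Celsius" vs. "Fahrenheit"
        "custom:subject": subject,     # e.g. "room" vs. "body" temperature
        "dcat:mediaType": media_type,
    }

offer = describe_offering(
    title="Office room temperature",
    description="Minute-by-minute readings from meeting room 3.1",
    unit="Celsius",
    subject="room temperature",
    media_type="application/json",
)

# The ambiguities from the text are now explicit in the metadata:
assert offer["custom:unit"] == "Celsius"
assert offer["custom:subject"] == "room temperature"
```

A record like this, published in the data space's catalog, is what lets other participants discover the offering and judge whether it fits their needs.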
After describing the data offering, the participants negotiate the usage policy and agree on a contract. Again, this is part of the RAM. Then they start the data transaction by granting access to a resource and providing the data in a way it can be consumed. This important point is also described in the RAM, which links to the Dataspace Protocol. More on this in a minute.
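The negotiation step can be pictured as a state machine that both sides walk through before any data flows. The sketch below is a simplified illustration in the spirit of the Dataspace Protocol's contract negotiation; the state names and allowed transitions are reduced for readability and omit error handling and termination details from the specification.

```python
# Sketch: a simplified contract negotiation state machine. States and
# transitions are illustrative and do not cover the full Dataspace
# Protocol specification (e.g. error and termination handling).

TRANSITIONS = {
    "REQUESTED": {"OFFERED", "AGREED", "TERMINATED"},
    "OFFERED":   {"ACCEPTED", "TERMINATED"},
    "ACCEPTED":  {"AGREED", "TERMINATED"},
    "AGREED":    {"VERIFIED", "TERMINATED"},
    "VERIFIED":  {"FINALIZED", "TERMINATED"},
}

class Negotiation:
    def __init__(self):
        self.state = "REQUESTED"  # consumer requests an offer first

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

n = Negotiation()
for step in ("OFFERED", "ACCEPTED", "AGREED", "VERIFIED", "FINALIZED"):
    n.advance(step)
assert n.state == "FINALIZED"  # contract agreed; data transfer may start
```

The point of the state machine is that neither side can skip ahead: access is only granted once the contract has passed through every agreed step.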
These technical matters need to be managed. The generation of offerings, contract negotiations, and the management of data transfers from a business perspective are all specific to a given data space. The data space participants provide data planes suited to their use cases, data spaces, and domains.
Then there is the data connector and its functional components. On one side, it connects to the other data space participants – here, technical interoperability is central. On the other side, it connects to the participant's own systems: IoT services, data analytics, and so on. Within the connector, certain functionalities are required for data management, access control, and usage policy management. These aspects must be taken care of when building a connector, and we define them in the Reference Architecture. What we do not offer is software that can be implemented and used immediately, as every data sharing use case needs some degrees of freedom to customize it to its specific needs.
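To make those connector responsibilities concrete, here is a minimal sketch of a connector that gates data release behind an access check and a usage-policy check. All class, field, and method names are assumptions for illustration; they are not interfaces defined by the Reference Architecture.

```python
# Sketch: a connector enforcing access control and a usage policy
# before releasing data. All names here are illustrative assumptions,
# not interfaces from the IDS Reference Architecture.

from dataclasses import dataclass, field

@dataclass
class UsagePolicy:
    allowed_purposes: set     # e.g. {"research"}
    max_retrievals: int       # a simple usage constraint

@dataclass
class Offering:
    data: bytes
    policy: UsagePolicy
    retrievals: int = 0

class Connector:
    def __init__(self):
        self._offerings = {}
        self._authorized = set()   # participant IDs granted access

    def publish(self, name, offering):
        self._offerings[name] = offering

    def grant_access(self, participant_id):
        self._authorized.add(participant_id)

    def request_data(self, participant_id, name, purpose):
        offering = self._offerings[name]
        if participant_id not in self._authorized:
            raise PermissionError("access denied")
        if purpose not in offering.policy.allowed_purposes:
            raise PermissionError("purpose not covered by usage policy")
        if offering.retrievals >= offering.policy.max_retrievals:
            raise PermissionError("retrieval limit reached")
        offering.retrievals += 1
        return offering.data

c = Connector()
c.publish("temps", Offering(b"21.5", UsagePolicy({"research"}, max_retrievals=1)))
c.grant_access("consumer-1")
assert c.request_data("consumer-1", "temps", "research") == b"21.5"
```

The design choice worth noting is that the policy check sits inside the connector, on the provider's side of the transfer: this is what makes the usage policy enforceable rather than a mere agreement on paper.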
Dataspace Protocol as foundation for sovereign data sharing
IDSA is working on the Dataspace Protocol because companies and other organizations will implement different connectors that need to be interoperable. Technical specifications are necessary for this fundamental interoperability. Data sharing requires the provision of metadata that enables the transfer of data via a data transfer protocol. For this, we rely on existing standards, such as those of the W3C. We need to describe, in a modular way, how two participants must behave if they want to connect, negotiate, and initiate the data transfer.
The Dataspace Protocol is the foundation for sovereign data sharing, for interoperability, and for managing policies. The goal is an international standard, such as an ISO standard, that everybody can use and trust for their business. The Dataspace Protocol will be finished by the end of the year. RAM 4 is stable – a new version is currently being worked on to include new concepts from the IDSA Rulebook and add more modularity. Because the IDS-RAM is strong and agile – as is the animal!