Yet as AI scales, a different kind of friction begins to dominate. Not compute. Not model performance. But trust. Under what rules is data being used? Who is accountable when AI systems act across organizational boundaries? How can consent, provenance, and compliance be enforced end‑to‑end, especially when systems learn, adapt, and interact autonomously?
These questions are now the real bottleneck for AI at scale. And they are precisely where data spaces come into focus.
What data spaces already provide
Long before AI became a board‑level priority, data spaces were designed to address precisely these governance challenges. Through shared standards and specifications, they establish identity and participation rules, define machine‑readable usage policies, enable enforceable data contracts, and ensure traceable data provenance across organizations.
Data spaces act as an operational trust layer: one that allows data to flow while remaining under the control of its providers.
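To make "machine-readable usage policies" concrete, here is a minimal sketch of what such a policy can look like. It is loosely modeled on the W3C ODRL vocabulary, which data space specifications commonly build on; all identifiers (URNs, participant names, the dataset, the purpose constraint) are placeholders, not taken from any real deployment.

```python
import json

# Illustrative usage policy loosely modeled on the W3C ODRL vocabulary.
# Every identifier below is a placeholder for this sketch.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Agreement",
    "uid": "urn:example:policy:1",
    "permission": [{
        "target": "urn:example:dataset:sensor-readings",
        "assigner": "urn:example:participant:provider",
        "assignee": "urn:example:participant:consumer",
        "action": "use",
        # Constraint: the data may only be used for a declared purpose.
        "constraint": [{
            "leftOperand": "purpose",
            "operator": "eq",
            "rightOperand": "model-inference"
        }]
    }]
}

print(json.dumps(policy, indent=2))
```

Because the policy is structured data rather than legal prose, a connector can evaluate it automatically before any data leaves the provider's control.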
Complementary, not competing, layers
At the same time, new AI‑specific protocols are emerging to solve technical problems. Protocols such as the Model Context Protocol describe how AI systems access tools, retrieve context, or interact with resources. They focus on interaction mechanics.
Data spaces address a complementary question: under which rules these interactions take place across organizational boundaries, and how accountability is maintained when systems act autonomously. Seen together, these layers do not compete. They complete each other.
The relationship is simple but often misunderstood. AI protocols explain how systems interact. Data spaces define the conditions under which those interactions are allowed, governed, and trusted. "Better together" is not a slogan, but an accurate description of this division of responsibility.
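The division of responsibility above can be sketched in a few lines: one layer handles the mechanics of a tool call, the other decides whether the call is permitted under the agreed policy. This is a purely hypothetical illustration; the function names (`check_usage_policy`, `call_tool`) and the in-memory policy table are invented for this sketch and do not come from any data space or AI protocol specification.

```python
# Hypothetical sketch of the two layers described above.
# The policy table stands in for a negotiated data contract.
ALLOWED = {("consumer-a", "sensor-readings"): {"model-inference"}}

def check_usage_policy(participant: str, dataset: str, purpose: str) -> bool:
    """Data space layer: is this interaction allowed under the agreed policy?"""
    return purpose in ALLOWED.get((participant, dataset), set())

def call_tool(dataset: str) -> str:
    """AI protocol layer: the mechanics of fetching context (stubbed here)."""
    return f"context from {dataset}"

def governed_retrieval(participant: str, dataset: str, purpose: str) -> str:
    """Only invoke the interaction layer once the governance layer agrees."""
    if not check_usage_policy(participant, dataset, purpose):
        raise PermissionError(f"policy denies {purpose} on {dataset}")
    return call_tool(dataset)

# Permitted purpose succeeds; an undeclared purpose is refused.
print(governed_retrieval("consumer-a", "sensor-readings", "model-inference"))
```

The point of the sketch is the ordering: the governance decision wraps the interaction mechanics, rather than being bolted on afterwards.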
Making the connection visible
Within IDSA, this convergence of AI and data spaces has led to focused work. A dedicated Task Force on AI and Data Spaces has been established to document how existing data space standards apply to concrete AI scenarios already being deployed. These scenarios range from retrieval and inference to federated learning and agent‑based workflows.
The task force is not developing new protocols or architectures. Its mandate is to translate what already exists into guidance that is accessible to AI practitioners beyond the traditional data space community. Work is carried out transparently, with a public scoping paper planned for release by the end of June 2026.
Trust as infrastructure
Trust and data sovereignty are not features that AI systems can simply add at the end. They are infrastructure. And in the case of data spaces, that infrastructure is already in place.
“As AI continues to move deeper into operational environments, this perspective reframes the discussion away from future promises and toward present capabilities. It highlights that many of the governance challenges facing AI today have already been addressed elsewhere, quietly, through years of standardization and deployment.”
Reinhold Achatz, Chairman of the Board of IDSA
AI and data spaces are not separate conversations. They are part of the same system. And increasingly, they are better understood together.