diff --git a/docs-kits/kits/knowledge-agents/adoption-view/intro.md b/docs-kits/kits/knowledge-agents/adoption-view/intro.md
index 14043870eb8..ece9f1396ee 100644
--- a/docs-kits/kits/knowledge-agents/adoption-view/intro.md
+++ b/docs-kits/kits/knowledge-agents/adoption-view/intro.md
@@ -34,189 +34,169 @@ This document describes the foundations of the (Knowledge) Agents KIT (=Keep It
For more information see
-* Our [Use Case](usecase) description
-* A [Business Value](value) list
-* The [CX-0084 Federated Queries in Data Spaces](https://github.com/catenax-ng/product-catena-x-standardization/blob/CX-0084-FederatedQueriesInDataSpaces/standards/CX-0084-FederatedQueriesInDataSpaces/1.0.0/CX-0084-FederatedQueriesInDataSpaces-v1.0.0.md) standard
-* The [CX-00XX Ontology Models in Catena-X](https://github.com/catenax-ng/product-knowledge/blob/feature/ART3-382-documentation/docs/adoption-view/CX-00XX-Ontology%20Models%20in%20Catena-X_v1.0.0.md) standard
-* The [conformity](testbed) testbed
+* The [CX-0084 Federated Queries in Data Spaces](https://catena-x.net/fileadmin/user_upload/Standard-Bibliothek/Update_September23/CX-0084-FederatedQueriesInDataSpaces-v1.0.0.pdf) standard
+* The [CX-0067 Quality Guidelines for Ontology Models in Catena-X](https://catena-x.net/de/standard-library) standard
* An [Architecture](../development-view/architecture) documentation
* The [Deployment & Conformity](../operation-view/deployment) guide
-## Basic Technology
+### Vision & Mission
-### Dataspace
+#### Vision
-[Data Spaces](https://en.wikipedia.org/wiki/Dataspaces) (short: dataspaces) can be seen as one of the most promising technologies for sovereign data exchange between companies or company divisions.
-They foster new use cases and collaboration scenarios which were not possible before.
-Furthermore, they can be used to streamline and digitize existing processes for more efficient data handling.
+We want to specify a semantically-driven and state-of-the-art compute-to-data architecture for (not only) automotive use cases based on the best [GAIA-X](https://gaia-x.eu/), [W3C](https://www.w3.org/2001/sw/wiki/Main_Page) and [Big Data](https://en.wikipedia.org/wiki/Big_data) practices.
-### GAIA-X
-
-[Gaia-X](https://gaia-x.eu/what-is-gaia-x/deliverables/data-spaces/) defines a reference architecture for dataspaces, with focus on (1) interoperability and (2) portability of data and service, (3) sovereignty over data, and (4) security and trust to achieve a decentralized, federated and open ecosystem.
-
-### Eclipse Tractus-X
-
-[![Basic Dataspace Technology](/img/knowledge-agents/dataspace_small.png)](/img/knowledge-agents/dataspace.png)
-
-[Eclipse Tractus-X](https://eclipse-tractusx.github.io/) is the reference implementation of that concept that is brought forward by the [Catena-X](http://catena-x.net) association.
-It relies on a Peer-to-Peer networking approach where each Business Partner (Consumer or Provider) has a [Connector](https://github.com/eclipse-edc/Connector) which can securely transfer data in the form of files and service calls (payloads) according to mutual contracts. File meta-data, their intrinsic format and the download protocol are standardized using a [Digital Twin Standard](https://industrialdigitaltwin.org/).
-
-For more information, see the [Connector Kit](https://eclipse-tractusx.github.io/docs/category/connector-kit)
-
-## Federated Operations through Agents
-
-In many cases, the standardized transfer of data may already be enough to create value (e.g. exchange precomputed product carbon footprints across a supply chain).
+[![Agent-Oriented Dataspace](/img/knowledge-agents/dataspace_agent_small.png)](/img/knowledge-agents/dataspace_agent.png)
-However, other use cases such as the joint prediction of the behaviour of a complex machine that does not even exists but is developed by a team of collaborating companies, require more advanced solutions.
+#### Mission
-These are solutions in which data and information is federated into a multi-directional semantic context.
+##### Specifications
-### Agent
+We compose specifications for invoking and performing semantic computations (inferences or `skills`) based on normalized and linked data representations (`knowledge graph` described as RDF triples) over the `dataspace`.
-Simply put, an [Agent](https://en.wikipedia.org/wiki/Software_agent) is an extension/companion to the Connector that allows to transfer Business Logic instead of raw data payloads.
+Leveraging existing standards such as [IDS](https://internationaldataspaces.org/), [RDF](https://www.w3.org/2001/sw/wiki/RDF), [SparQL](https://www.w3.org/2001/sw/wiki/SPARQL), [OWL](https://www.w3.org/2001/sw/wiki/OWL), [SHACL](https://www.w3.org/2001/sw/wiki/SHACL) & [EClass](https://eclass.eu/), linked data and corresponding skills may be provisioned, consumed, federated and visualised across the complete dataspace (technically) and hence the complete supply chain (business-wise).
-[![Agent-Oriented Dataspace](/img/knowledge-agents/dataspace_agent_small.png)](/img/knowledge-agents/dataspace_agent.png)
+Skills can be described in tractable sub-languages of well-known declarative syntaxes, such as [SparQL](https://www.w3.org/2001/sw/wiki/SPARQL) (in the future maybe also: [GraphQL](https://en.wikipedia.org/wiki/GraphQL) and [SQL](https://en.wikipedia.org/wiki/SQL)).
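+
+For illustration, such a skill could be a simple SPARQL query like the following sketch (the `cx:` prefix and all property names are illustrative assumptions, not normative vocabulary):
+
+```sparql
+# Hypothetical skill: list the suppliers of a given part
+PREFIX cx: <https://w3id.org/catenax/ontology#>
+
+SELECT ?supplier ?partName WHERE {
+  ?part cx:id "PART-123" ;
+        cx:name ?partName ;
+        cx:suppliedBy ?supplier .
+}
+```
+
+A consumer would register such a query as a skill asset and invoke it by reference, with the agents resolving the graph patterns against the bound data sources.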
-### Skill
+##### Implementations
-Using her/his agent, a Consumer can invoke a [Skill](https://en.wikipedia.org/wiki/Amazon_Alexa) (a kind of Stored Procedure in a standardized Scripting/Query Language) which is then executed
-distributedly in the Dataspace.
+We provide open-source reference implementations of these standards to Tractus-X, in particular extending the [Connector KIT](/docs-kits/kits/tractusx-edc/docs/kit/adoption-view/Adoption%20View).
-### Binding and Delegation
+These components are called [`agents`](https://en.wikipedia.org/wiki/Software_agent) because they (semi-)actively negotiate and collaborate with each other (via so-called graph and skill assets) over the dataspace in order to derive higher-level semantic knowledge from the plain, isolated data.
-At the Provider side, the Agent [binds](https://en.wikipedia.org/wiki/Data_binding) to data lakehouses and other backend systems by translating the Skill into native SQL queries or REST API calls.
-Agents may [delegate](https://en.wikipedia.org/wiki/Delegation_(computing)) a part of their work (sub-skills) to other Agents/Business Partners based on data ownership and using recursive dataspace contracts/policies.
+Knowledge agents introduce an ecosystem of efficient services (for data handling, compute, skill orchestration and frontend components) where an optimal matchmaking between those services needs to be reached.
-### Batch Extraction of Knowledge
+##### Support
-A Skill typically simultaneously computes over large [batches](https://en.wikipedia.org/wiki/Batch_processing) of entities (here: digital twins) and extracts high-quality but low-bandwidth result payloads (reports, lists, aggregations; in general: [Knowledge](https://en.wikipedia.org/wiki/Knowledge_extraction)).
+We support use case consumers, app developers, data providers, service providers and IT/domain consultants in order to operate as economically and well-informed as possible by giving them first-class tools, documentation and feedback.
-### Federated Graph
+##### Technology Bridges
-We expect that the raw data for extracting the knowledge from using Skills is organized in a high-degree normal form called a [graph](https://en.wikipedia.org/wiki/Knowledge_graph).
+We define bridges to other digital twin approaches, such as the Asset Administration Shell (AAS), so that data and service provisioning into multiple use cases will be as effortless as possible.
-Knowledge Graphs can be regarded as sets of (Subject-Node, Predicate-Edge, Object-Node) triples.
+### Business Value
-Since the raw data is never copied but rather traversed by the Skill Bindings, the Dataspace hence becomes a [federated](https://en.wikipedia.org/wiki/Federated_database_system) or virtual knowledge graph.
+The Agents KIT is the best fit for use cases and applications which
+- do not focus on exchanging/analyzing static assets between two peers in the supply chain, but instead require crawling over a whole dynamic branch of the supply tree.
+- do not focus on retrieving predefined schemas of digital twins, but need to perform complex searches and aggregations over both catalog and assets.
+- require rapidly changing and extensible logic that should reuse existing assets which have already been built for other use cases.
+- need to securely extract & aggregate knowledge from large amounts of assets and/or large assets.
-## Catena-X Standard and KITs
-
-The concrete choices for how the data graphs are to be constructed (using the [Resource Description Framework](https://www.w3.org/RDF/)), how Skills are to be interpreted (using the [SPARQL](https://www.w3.org/TR/sparql11-query/) language) and which vocabulary should be applied by both approaches (using the [Web Ontology Language](https://www.w3.org/OWL/) (OWL)) is subject of the following two [Catena-X e.V. Standards](https://catena-x.net/de/standard-library):
+As a dataspace participant, adopting the Agents KIT will
+- allow you to easily bind your own data and services into the relevant use cases and applications
+- give you the means to integrate your company-internal data sources with the dataspace as one big knowledge graph
-- [CX-0067 Ontology Models in Catena-X](https://catena-x.net/fileadmin/user_upload/Standard-Bibliothek/Update_September_2023/CX-0084-FederatedQueriesInDataSpaces-v1.0.0.pdf)
-- CX-0084 Federated Queries in Dataspaces (Upcoming)
+The following advantages play an important role.
-This Agents KIT bundles a set of FOSS (Free and Open-Source Software) reference implementations of this standard following the Eclipse Tractus-X guidelines.
+#### Widespread Standard
-If you employ any of our artifacts and/or follow our blueprints, you will be eligible for compliance to a respective Catena-X release. Appropriate assessment criteria and methods have been established as a part of the Agent standard.
+##### Isn't this a proprietary approach?
-The Agents KIT is depending on the [Connector Kit](https://eclipse-tractusx.github.io/docs/category/connector-kit)
-
-The Agents KIT is the basis for other, use-case specific Agent-enabled KITs, services and applications, such as the [Behaviour Twin Remaining Useful Life (RUL Kit](/docs-kits/kits/Behaviour%20Twin%20RuL%20Kit/Adoption%20View%20Remaining%20Useful%20Life%20Kit)
+The underlying [API](https://en.wikipedia.org/wiki/API), protocols, standards and technologies are first-class citizens of the official [Gaia-X](https://gaia-x.eu/what-is-gaia-x/deliverables/data-spaces/) & [W3C Semantic Web](https://www.w3.org/standards/semanticweb/) portfolio.
+These technologies have already been adopted globally across a plethora of domains, use cases and derived (open-source & commercial) projects.
+Using these approaches will give you a competitive advantage which is independent of the concrete dataspace instance/application that you are targeting.
-## Abstract Use Case
+#### No Redundancy
-We distinguish between Dataspace Participants and other parties (who support the Dataspace Participants).
+##### Is this a replacement to the existing Aspect Meta Model (BAMM/SAMM) & Asset Administration Shell (AAS) approach?
-[![Dataspace Roles](/img/knowledge-agents/dataspace_roles_small.png)](/img/knowledge-agents/dataspace_roles.png)
+Agent technology is a complement: both approaches can be deployed in coexistence.
-### Dataspace Participants
+There will be some use cases (large interconnected datasets, ad-hoc querying, inference of derived knowledge) which favour the knowledge agents approach, while others (simple access to already identified remote twins) will more adequately stay with the BAMM/SAMM & AAS approach.
-The following stakeholders should [deploy](../operation-view/deployment) modules/artifacts of the Agents Kit.
-In particular, each Dataspace Participant needs an [Agent-Enabled Connector](../operation-view/agent_edc).
+For the data providers, it will be easy to mount their artifacts (files, data source partitions, backend interfaces) under both types of assets (submodels, graphs). We provide [bridging technology](../development-view/aas/bridge) for that purpose.
-#### Consumer
+For the app developers, it will be easy to use both [SDK](https://en.wikipedia.org/wiki/Software_development_kit)s over a single consumer connector and even interchange the identifiers/[IRI](https://en.wikipedia.org/wiki/Internationalized_Resource_Identifier)s.
-Any party who wants to use data and logic using Agent Technology (for example by employing Agent-Enabled Applications or Services), such as a Recycling Company or a Fleet Manager
+For the modellers, there is only a loose coupling between a protocol-independent, inference-agnostic data format description, such as BAMM/SAMM, and a protocol-binding, but data-format independent inference/semantic model, such as OWL-R. We expect tools to generate at least the latter from ubiquitous Excel/Tabular specifications. We could also imagine a kind of OWL-R to BAMM/SAMM embedding (but not vice versa) once this is needed by a use case.
-#### Provider
+#### Enhanced Security
-We distinguish Providers whether they want to publish data or logic using Agent Technology
+##### Isn't it inherently insecure to let arbitrary Dataspace tenants invoke ad-hoc computations in my backend?
-##### Data Provider
+First, these are not arbitrary tenants, but access is only given to business partners with whom you have signed contracts (and who appear in certain roles there).
+A Skill request from a non-authorized chain of computation would not be able to enter your backend at all.
-Any party who provides data (for example by a backend database or other Agent-enabled Applications or Services), for example an Automotive OEM (original equipment manufacturer)
+Furthermore, you would not expose your backend directly, but rather introduce a [virtualization layer](../development-view/architecture) between the Agent and your data source. This introduces another (role-based) security domain by appropriate sub-schemas and filters. So different contracts can be mapped to different security principals/data views in the backend.
-##### Function Provider
+We do not introduce arbitrary (Turing-equivalent, hence undecidable) ad-hoc computations; instead, the [SPARQL](../development-view/sparql) standard introduces a well-defined set of operations whose effects and consequences can be checked and validated in advance (hypervision).
-Any party who provides proprietary functions (for example by a [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) endpoint or other Agent-enabled Applications or Services), for example a Tier1 Sensor Device Supplier
+Finally, we are investigating a form of differential privacy which introduces noise between your data source and its graph representation such that original values can be effectively hidden from the reporting output.
-##### Skill (=Compute) Provider
+#### Easy Deployment
-Any party who provides compute resources and/or procedural logic (for example by a server or other Agent-enabled Applications or Services), for example a Recycling Software Specialist
+##### Doesn't this impose additional burdens to the dataspace participants?
-##### Core Service Provider
+For data consumers, there is virtually nothing to do. All they need to do is add an Agent-Enabled data plane to their connector (or even use our Agent Plane as a full-blown replacement for the Http/AmazonS3 standard data planes of Tractus-X).
-Any party offering ontology models (semantic/ontology hub) or federated catalogues, for example an Operating Company
+For smaller data and skill providers, there will be the possibility to host non-critical data directly through the storage facilities of the Agent Plane.
-### Additional Stakeholders
+All others will employ data virtualization techniques anyway in order to scale and shield their critical data. That is where the binding agents come into play, as one additional container/layer that is described declaratively (not programmatically).
-The following stakeholders should [interface or implement](../development-view/architecture) modules of the Agents Kit.
+#### Great Scalability
-#### Business Developer
+##### How could such a scheme be efficient at all?
-Any party who publishes an Application, Standard or KIT based on Agent Technology on behalf of a Dataspace Participant (e.g. a Fleet Monitor, an Incident Reporting Solution, a Telematics KIT)
+Our technology has been thoroughly developed, tested and piloted over the years 2022 and 2023. One key component is the ability of any Agent to delegate
+a part of its work to other Business Partners/Agents and hence to bring the computations close to the actual data. This delegation pattern has several very nice properties:
-#### Enablement Service Developer
+* Delegation is dynamic based on the supply chain(s) that are described in the actual data. So the actual computation chain optimizes with the data.
+* Delegation is parallelized in the sense that several suppliers are requested simultaneously. Latency is hence minimized.
+* Delegation may be opaque from the consumer view if contracts so require.
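+
+In SPARQL terms, such a delegation corresponds to a federated sub-query; the following sketch (with an illustrative partner endpoint and property names, not normative vocabulary) forwards part of a skill to a supplier's agent:
+
+```sparql
+# Hypothetical delegating skill: fetch carbon footprints from a supplier's agent
+PREFIX cx: <https://w3id.org/catenax/ontology#>
+
+SELECT ?component ?footprint WHERE {
+  ?vehicle cx:builtFrom ?component .
+  # this sub-pattern is evaluated by the partner's agent, close to its data
+  SERVICE <https://supplier-agent.example.com/sparql> {
+    ?component cx:hasCarbonFootprint ?footprint .
+  }
+}
+```
+
+Because the `SERVICE` sub-pattern is shipped to the partner rather than the partner's data being copied, only the low-bandwidth result bindings travel back to the consumer.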
-Any party who offers ready-made artifacts, packages and managed services assisting Dataspace Participants/Applications to process data using Agent technology (e.g. a Graph Database, a Virtual Graph Binding Engine, an EDC Package)
+### Use Cases
-## Why should I Adopt Agent Technology?
+[![Dataspace Roles](/img/knowledge-agents/dataspace_roles_small.png)](/img/knowledge-agents/dataspace_roles.png)
-### Widespread Standard
+The Agents KIT is the basis for other, use-case specific Agent-enabled KITs, services and applications, such as the [Behaviour Twin Remaining Useful Life (RUL) Kit](/docs-kits/kits/Behaviour%20Twin%20RuL%20Kit/Adoption%20View%20Remaining%20Useful%20Life%20Kit).
-#### Isn't this a proprietary approach?
+We distinguish between Dataspace Participants and other parties (who support the Dataspace Participants).
-The underlying [API](https://en.wikipedia.org/wiki/API), protocols, standards and technologies are first-class citizens of the official [Gaia-X](https://gaia-x.eu/what-is-gaia-x/deliverables/data-spaces/) & [W3C Semantic Web](https://www.w3.org/standards/semanticweb/) portfolio.
-These techs have been already adopted globally for a plethora of domains, use cases and derived (Open Source & commercial) projects.
-Using these approaches will give you a competitive advantage which is even independent of the concrete dataspace instance/application that you are targeting at.
+#### Dataspace Participants
-### No Redundancy
+The following stakeholders should [deploy](../operation-view/deployment) modules/artifacts of the Agents Kit.
+In particular, each Dataspace Participant needs an [Agent-Enabled Connector](../operation-view/agent_edc).
-#### Is this a replacement to the existing Aspect Meta Model (BAMM/SAMM) & Asset Administration Shell (AAS) approach?
+##### Consumer
-Agent technology is a complement that means that both approaches can be deployed in co-existance.
+Any party who wants to use data and logic using Agent Technology (for example by employing Agent-Enabled Applications or Services), such as a Recycling Company or a Fleet Manager
-There will be some use cases (large interconnected datasets, ad-hoc querying, inference of derived knowledge) which enfavour the knowledge agents approach, others (simple access to already identified remote twins) will more adequately stay with the BAMM/SAMM & AAS approach.
+##### Provider
-For the data providers, it will be easy to mount their artifacts (files, data source partitions, backend interfaces) under both types of assets (submodels, graphs). We provide [bridging technology](../development-view/aas/bridge) for that purpose.
+We distinguish Providers by whether they publish data or logic using Agent Technology.
-For the app developers it will be easy to use both [SDK](https://en.wikipedia.org/wiki/Software_development_kit)s over a single consumer connector and even interchange the identifiers/[IRI](https://en.wikipedia.org/wiki/Internationalized_Resource_Identifier)s.
+###### Data Provider
-For the modellers, there is only a loose coupling between a protocol-independent, inference-agnostic data format description, such as BAMM/SAMM, and a protocol-binding, but data-format independent inference/semantic model, such as OWL-R. We expect tools to generate at least the latter from ubiquitous Excel/Tabular specifications. We could also imagine a kind of OWL-R to BAMM/SAMM embedding (but not vice versa) once this is needed by a use case.
+Any party who provides data (for example by a backend database or other Agent-enabled Applications or Services), for example an Automotive OEM (original equipment manufacturer)
-### Enhanced Security
+###### Function Provider
-#### Isn't it inherently insecure to let arbitrary Dataspace tenants invoke ad-hoc computations in my backend?
+Any party who provides proprietary functions (for example by a [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) endpoint or other Agent-enabled Applications or Services), for example a Tier1 Sensor Device Supplier
-First, these are not arbitrary tenants, but access is only given to business partners with whom you have signed contracts (and who appear in certain roles there).
-A Skill request from a non-authorized chain of computation would not be able to enter your backend at all.
+###### Skill (=Compute) Provider
-Furthermore, you would not expose your backend directly, but rather introduce a [virtualization layer](../development-view/architecture) between the Agent and your data source. This introduces another (role-based) security domain by appropriate sub-schemas and filters. So different contracts can be mapped to different security principals/data views in the backend.
+Any party who provides compute resources and/or procedural logic (for example by a server or other Agent-enabled Applications or Services), for example a Recycling Software Specialist
-We do not introduce arbitrary (turing-equivalent, hence undecidable) ad-hoc computations, but the [SPARQL](../development-view/sparql) standard introduces a well-defined set of operations whose effects and consequences can be checked and validated in advance (hypervision).
+###### Core Service Provider
-Finally, we are investigating a form of differential privacy which introduces noise between your data source and its graph representation such that original values can be effectively hidden from the reporting output.
+Any party offering ontology models (semantic/ontology hub) or federated catalogues, for example an Operating Company
-### Easy Deployment
+#### Additional Stakeholders
-#### Doesn't this impose additional burdens to the dataspace participants?
+The following stakeholders should [interface or implement](../development-view/architecture) modules of the Agents Kit.
-For data consumers, there is virtually nothing to do. All they have to care for is to add an Agent-Enabled data plane to their connector (or even use our Agent Plane as a fully-blown replacement for the Http/AmazonS3 standard of Tractus-X).
+##### Business Developer
-For smaller data and skill providers, there will be the possibility to host non-critical data directly through the storage facilities of the Agent Plane.
+Any party who publishes an Application, Standard or KIT based on Agent Technology on behalf of a Dataspace Participant (e.g. a Fleet Monitor, an Incident Reporting Solution, a Telematics KIT)
-For all others, they will employ techniques for data virtualization anyway to scale and shield their critical data. That is where the binding agents as one additional container/layer that is declaratively described (not: programmatically) come into play.
+##### Enablement Service Developer
-### Great Scalability
+Any party who offers ready-made artifacts, packages and managed services assisting Dataspace Participants/Applications to process data using Agent technology (e.g. a Graph Database, a Virtual Graph Binding Engine, an EDC Package)
-#### How could such a scheme be efficient at all
+### Catena-X Standards
-Our technology has been thoroughly developed, tested and piloted over the years 2022 and 2023. One key component is the ability of any Agent to delegate
-a part of its work to other Business Partners/Agents and hence to bring the computations close to the actual data. This delegation pattern has several very nice properties:
+The concrete choices for how the data graphs are to be constructed (using the [Resource Description Framework](https://www.w3.org/RDF/)), how Skills are to be interpreted (using the [SPARQL](https://www.w3.org/TR/sparql11-query/) language) and which vocabulary should be applied by both approaches (using the [Web Ontology Language](https://www.w3.org/OWL/) (OWL)) are the subject of the following two [Catena-X e.V. Standards](https://catena-x.net/de/standard-library):
-* Delegation is dynamic based on the supply chain(s) that are described in the actual data. So the actual computation chain optimizes with the data.
-* Delegation is parallelized in the sense that several suppliers are requested simultaneously. Latency is hence minimized.
-* Delegation may be opaque from the consumer view if contracts require so.
+- [CX-0084 Federated Queries in Dataspaces](https://catena-x.net/fileadmin/user_upload/Standard-Bibliothek/Update_September_2023/CX-0084-FederatedQueriesInDataSpaces-v1.0.0.pdf)
+- [CX-0067 Ontology Models in Catena-X (Upcoming)](https://catena-x.net/de/standard-library)
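+
+As a minimal illustration of these choices, a provider could materialize its raw data as RDF triples; the following SPARQL Update sketch uses an illustrative prefix and property names (not taken from the standards):
+
+```sparql
+# Hypothetical example: publishing one part as RDF triples
+PREFIX cx:  <https://w3id.org/catenax/ontology#>
+PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
+
+INSERT DATA {
+  <urn:part:PART-123> a cx:Part ;
+      cx:name "Gearbox"^^xsd:string ;
+      cx:suppliedBy <urn:bpnl:BPNL000000000001> .
+}
+```
+
+Skills then query such triples with SPARQL, and OWL vocabularies give the shared meaning of classes and properties across participants.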
(C) 2021,2023 Contributors to the Eclipse Foundation. SPDX-License-Identifier: CC-BY-4.0
diff --git a/docs-kits/kits/knowledge-agents/operation-view/deployment.md b/docs-kits/kits/knowledge-agents/operation-view/deployment.md
index 73251dd7747..0068fc5675f 100644
--- a/docs-kits/kits/knowledge-agents/operation-view/deployment.md
+++ b/docs-kits/kits/knowledge-agents/operation-view/deployment.md
@@ -31,6 +31,7 @@ title: Deployment
![Agents Kit Banner](/img/knowledge-agents/AgentsKit-Icon.png)
This document describes the deployment of the (Knowledge) Agents KIT (=Keep It Together) depending on the role that the respective tenant/business partner has.
+It also provides a runbook for deploying a minimal stable environment for testing purposes.
For more information see
@@ -94,4 +95,450 @@ As a function provider, you want to
* [bridge](bridge) between the Knowledge Agent and Asset Administration Shell APIs.
+## Runbook For Deploying and Smoke-Testing Knowledge Agents (Stable)
+
+Knowledge Agents on Stable is deployed on the following two tenants:
+- App Provider 1 (BPNL000000000001)
+ - Agent-Enabled Dataspace Connector
+ - In-Memory Hashicorp-Vault Control Plane
+ - Hashicorp-Vault Agent Data Plane
+ - Provisioning Agent incl. Local Database
+ - Remoting Agent
+- App Consumer 4 (BPNL0000000005VV)
+ - Agent-Enabled Dataspace Connector
+ - In-Memory Hashicorp-Vault Control Plane
+ - Hashicorp-Vault Agent Data Plane
+
+### 1. Prepare the Two Tenants
+
+As a first step, we installed two technical users for the dataspace connectors via https://portal.stable.demo.catena-x.net:
+- App Provider 1: sa4
+- App Consumer 4: sa5
+
+
+The generated secrets have been installed under https://vault.demo.catena-x.net/ui/vault/secrets/knowledge:
+- stable-provider-miw
+- stable-consumer-miw
+
+Further secrets have been installed:
+- oem-cert
+- oem-key
+- oem-symmetric-key
+- consumer-cert
+- consumer-key
+- consumer-symmetric-key
+
+Finally, an access token has been generated.
+
+### 2. Deploy the Agent-Enabled Connectors
+
+Using https://argo.stable.demo.catena-x.net/settings/projects/project-knowledge, the following two applications have been installed.
+
+We give the complete manifests but hide the secrets.
+
+#### App Provider 1 Dataspace Connector Manifest
+
+```yaml
+project: project-knowledge
+source:
+ repoURL: 'https://eclipse-tractusx.github.io/charts/dev'
+ targetRevision: 1.9.8
+ plugin:
+ env:
+ - name: HELM_VALUES
+ value: |
+ participant:
+ id: BPNL000000000001
+ nameOverride: agent-connector-provider
+ fullnameOverride: agent-connector-provider
+ vault:
+ hashicorp:
+ enabled: true
+ url: https://vault.demo.catena-x.net
+ token: ****
+ healthCheck:
+ enabled: false
+ standbyOk: true
+ paths:
+ secret: /v1/knowledge
+ secretNames:
+ transferProxyTokenSignerPrivateKey: oem-key
+ transferProxyTokenSignerPublicKey: oem-cert
+ transferProxyTokenEncryptionAesKey: oem-symmetric-key
+ controlplane:
+ securityContext:
+ readOnlyRootFilesystem: false
+ image:
+ pullPolicy: Always
+ ssi:
+ miw:
+ # -- MIW URL
+ url: "https://managed-identity-wallets-new.stable.demo.catena-x.net"
+ # -- The BPN of the issuer authority
+ authorityId: "BPNL00000003CRHK"
+ oauth:
+ # -- The URL (of KeyCloak), where access tokens can be obtained
+ tokenurl: "https://centralidp.stable.demo.catena-x.net/auth/realms/CX-Central/protocol/openid-connect/token"
+ client:
+ # -- The client ID for KeyCloak
+ id: "sa4"
+ # -- The alias under which the client secret is stored in the vault.
+ secretAlias: "stable-provider-miw"
+ endpoints:
+ management:
+ authKey: ****
+ ## Ingress declaration to expose the network service.
+ ingresses:
+ - enabled: true
+ # -- The hostname to be used to precisely map incoming traffic onto the underlying network service
+ hostname: "agent-provider-cp.stable.demo.catena-x.net"
+ # -- EDC endpoints exposed by this ingress resource
+ endpoints:
+ - protocol
+ - management
+ - control
+ # -- Enables TLS on the ingress resource
+ tls:
+ enabled: true
+ dataplanes:
+ dataplane:
+ securityContext:
+ readOnlyRootFilesystem: false
+ image:
+ pullPolicy: Always
+ configs:
+ dataspace.ttl: |-
+ ################################################
+ # Catena-X Agent Bootstrap
+ ################################################
+ @prefix : <https://w3id.org/catenax> .
+ @prefix cx: <https://w3id.org/catenax/ontology#> .
+ @prefix cx-common: <https://w3id.org/catenax/ontology/common#> .
+ @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+ @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+ @prefix bpnl: <bpn:legal:> .
+ @base <https://w3id.org/catenax> .
+
+ bpnl:BPNL000000000001 cx:hasBusinessPartnerNumber "BPNL000000000001"^^xsd:string;
+ cx:hasConnector <https://agent-provider-cp.stable.demo.catena-x.net>;
+ cx-common:hasConnector <edcs://agent-provider-cp.stable.demo.catena-x.net>.
+
+ bpnl:BPNL0000000005VV cx:hasBusinessPartnerNumber "BPNL0000000005VV"^^xsd:string;
+ cx:hasConnector <https://agent-consumer-cp.stable.demo.catena-x.net>;
+ cx-common:hasConnector <edcs://agent-consumer-cp.stable.demo.catena-x.net>.
+ agent:
+ #synchronization: 360000
+ connectors:
+ - https://agent-provider-cp.stable.demo.catena-x.net
+
+ ## Ingress declaration to expose the network service.
+ ingresses:
+ - enabled: true
+ hostname: "agent-provider-dp.stable.demo.catena-x.net"
+ # -- EDC endpoints exposed by this ingress resource
+ endpoints:
+ - public
+ - default
+ - control
+ - callback
+ # -- Enables TLS on the ingress resource
+ tls:
+ enabled: true
+ chart: agent-connector-memory
+destination:
+ server: 'https://kubernetes.default.svc'
+ namespace: product-knowledge
+```
+
+#### App Consumer 4 Dataspace Connector Manifest
+
+```yaml
+project: project-knowledge
+source:
+ repoURL: 'https://eclipse-tractusx.github.io/charts/dev'
+ targetRevision: 1.9.8
+ plugin:
+ env:
+ - name: HELM_VALUES
+ value: |
+ participant:
+ id: BPNL0000000005VV
+ nameOverride: agent-connector-consumer
+ fullnameOverride: agent-connector-consumer
+ vault:
+ hashicorp:
+ enabled: true
+ url: https://vault.demo.catena-x.net
+ token: ****
+ healthCheck:
+ enabled: false
+ standbyOk: true
+ paths:
+ secret: /v1/knowledge
+ secretNames:
+ transferProxyTokenSignerPrivateKey: consumer-key
+ transferProxyTokenSignerPublicKey: consumer-cert
+ transferProxyTokenEncryptionAesKey: consumer-symmetric-key
+ controlplane:
+ securityContext:
+ readOnlyRootFilesystem: false
+ image:
+ pullPolicy: Always
+ ssi:
+ miw:
+ # -- MIW URL
+ url: "https://managed-identity-wallets-new.stable.demo.catena-x.net"
+ # -- The BPN of the issuer authority
+ authorityId: "BPNL00000003CRHK"
+ oauth:
+ # -- The URL (of KeyCloak), where access tokens can be obtained
+ tokenurl: "https://centralidp.stable.demo.catena-x.net/auth/realms/CX-Central/protocol/openid-connect/token"
+ client:
+ # -- The client ID for KeyCloak
+ id: "sa5"
+ # -- The alias under which the client secret is stored in the vault.
+ secretAlias: "stable-consumer-miw"
+ endpoints:
+ management:
+ authKey: ***
+ ## Ingress declaration to expose the network service.
+ ingresses:
+ - enabled: true
+ # -- The hostname to be used to precisely map incoming traffic onto the underlying network service
+ hostname: "agent-consumer-cp.stable.demo.catena-x.net"
+ # -- EDC endpoints exposed by this ingress resource
+ endpoints:
+ - protocol
+ - management
+ - control
+ # -- Enables TLS on the ingress resource
+ tls:
+ enabled: true
+ dataplanes:
+ dataplane:
+ securityContext:
+ readOnlyRootFilesystem: false
+ image:
+ pullPolicy: Always
+ configs:
+ dataspace.ttl: |-
+ ################################################
+ # Catena-X Agent Bootstrap
+ ################################################
+ @prefix : <https://w3id.org/catenax/ontology/> .
+ @prefix cx: <https://w3id.org/catenax/ontology/core#> .
+ @prefix cx-common: <https://w3id.org/catenax/ontology/common#> .
+ @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+ @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+ @prefix bpnl: <bpn:legal:> .
+ @base <https://w3id.org/catenax/ontology/> .
+
+ bpnl:BPNL000000000001 cx:hasBusinessPartnerNumber "BPNL000000000001"^^xsd:string;
+ cx:hasConnector <https://agent-provider-cp.stable.demo.catena-x.net>;
+ cx-common:hasConnector <https://agent-provider-cp.stable.demo.catena-x.net>.
+
+ bpnl:BPNL0000000005VV cx:hasBusinessPartnerNumber "BPNL0000000005VV"^^xsd:string;
+ cx:hasConnector <https://agent-consumer-cp.stable.demo.catena-x.net>;
+ cx-common:hasConnector <https://agent-consumer-cp.stable.demo.catena-x.net>.
+ agent:
+ # synchronization: 360000
+ connectors:
+ - https://agent-provider-cp.stable.demo.catena-x.net
+
+ ## Ingress declaration to expose the network service.
+ ingresses:
+ - enabled: true
+ hostname: "agent-consumer-dp.stable.demo.catena-x.net"
+ # -- EDC endpoints exposed by this ingress resource
+ endpoints:
+ - public
+ - default
+ - control
+ - callback
+ # -- Enables TLS on the ingress resource
+ tls:
+ enabled: true
+ chart: agent-connector-memory
+destination:
+ server: 'https://kubernetes.default.svc'
+ namespace: product-knowledge
+```
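The `dataspace.ttl` bootstrap graph in both manifests tells each agent plane which business partners exist and which connector endpoints serve them. The lookup it enables can be sketched in plain Python (illustrative names only; the real agent plane resolves this from the RDF graph, not from a dictionary):

```python
# Illustrative sketch of the BPNL-to-connector resolution enabled by the
# dataspace.ttl bootstrap graph; data mirrors the two partners above.
dataspace = {
    "BPNL000000000001": ["https://agent-provider-cp.stable.demo.catena-x.net"],
    "BPNL0000000005VV": ["https://agent-consumer-cp.stable.demo.catena-x.net"],
}

def connectors_for(bpnl: str) -> list[str]:
    """Return the connector endpoints registered for a business partner."""
    return dataspace.get(bpnl, [])

print(connectors_for("BPNL000000000001"))
```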
+
+### 3. Deploy App Provider 1 Provisioning Agent
+
+Using https://argo.stable.demo.catena-x.net/settings/projects/project-knowledge, the following application has been installed.
+
+For simplicity, the provisioning agent exposes a built-in sample H2 database as a graph and therefore needs write access to the file system for its non-root account.
+Hence, some of the following settings are specific to the stable environment and should not be used in production.
+
+```yaml
+project: project-knowledge
+source:
+ repoURL: 'https://eclipse-tractusx.github.io/charts/dev'
+ targetRevision: 1.9.8
+ plugin:
+ env:
+ - name: HELM_VALUES
+ value: |
+ securityContext:
+ readOnlyRootFilesystem: false
+ runAsUser: 999
+ runAsGroup: 999
+ podSecurityContext:
+ runAsUser: 999
+ runAsGroup: 999
+ fsGroup: 999
+ bindings:
+ dtc:
+ port: 8080
+ settings:
+ jdbc.url: "jdbc:h2:file:/opt/ontop/database/db;INIT=RUNSCRIPT FROM '/opt/ontop/data/dtc.sql'"
+ jdbc.driver: "org.h2.Driver"
+ ontop.cardinalityMode: "LOOSE"
+ mapping: |
+ [PrefixDeclaration]
+ cx-common: https://w3id.org/catenax/ontology/common#
+ cx-core: https://w3id.org/catenax/ontology/core#
+ cx-vehicle: https://w3id.org/catenax/ontology/vehicle#
+ cx-reliability: https://w3id.org/catenax/ontology/reliability#
+ uuid: urn:uuid:
+ bpnl: bpn:legal:
+ owl: http://www.w3.org/2002/07/owl#
+ rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns#
+ xml: http://www.w3.org/XML/1998/namespace
+ xsd: http://www.w3.org/2001/XMLSchema#
+ json: https://json-schema.org/draft/2020-12/schema#
+ obda: https://w3id.org/obda/vocabulary#
+ rdfs: http://www.w3.org/2000/01/rdf-schema#
+ oem: urn:oem:
+
+ [MappingDeclaration] @collection [[
+ mappingId dtc-meta
+ target bpnl:{bpnl} rdf:type cx-common:BusinessPartner ; cx-core:id {bpnl}^^xsd:string .
+ source SELECT distinct "bpnl" FROM "dtc"."meta"
+
+ mappingId dtc-content
+ target oem:Analysis/{id} rdf:type cx-reliability:Analysis ; cx-core:id {code}^^xsd:string ; cx-core:name {description}^^xsd:string .
+ source SELECT * FROM "dtc"."content"
+
+ mappingId dtc-part
+ target oem:Part/{entityGuid} rdf:type cx-vehicle:Part ; cx-core:id {enDenomination}^^xsd:string ; cx-core:name {classification}^^xsd:string .
+ source SELECT * FROM "dtc"."part"
+
+ mappingId dtc-meta-part
+ target oem:Part/{entityGuid} cx-vehicle:manufacturer bpnl:{bpnl}.
+ source SELECT "bpnl","entityGuid" FROM "dtc"."part"
+
+ mappingId dtc-part-content
+ target oem:Analysis/{dtc_id} cx-reliability:analysedObject oem:Part/{part_entityGuid}.
+ source SELECT "part_entityGuid","dtc_id" FROM "dtc"."content_part"
+
+ ]]
+ chart: provisioning-agent
+destination:
+ server: 'https://kubernetes.default.svc'
+ namespace: product-knowledge
+```
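Each `mappingId` above pairs a SQL `source` query with a `target` triple template; the Ontop engine instantiates the template once per result row. The substitution step can be illustrated in plain Python (a sketch only — real query answering is done by Ontop's query rewriting, not by materializing triples like this):

```python
# Illustrative only: instantiate an OBDA-style target template for one SQL row.
def instantiate(template: str, row: dict) -> str:
    """Substitute {column} placeholders in a target template with row values."""
    result = template
    for column, value in row.items():
        result = result.replace("{" + column + "}", str(value))
    return result

template = "bpnl:{bpnl} rdf:type cx-common:BusinessPartner ."
row = {"bpnl": "BPNL000000000001"}  # one row of SELECT distinct "bpnl" FROM "dtc"."meta"
print(instantiate(template, row))
```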
+
+### 4. Deploy App Provider 1 Remoting Agent
+
+Using https://argo.stable.demo.catena-x.net/settings/projects/project-knowledge, the following application has been installed.
+
+For simplicity, the remoting agent exposes a simple public API as a graph.
+
+```yaml
+project: project-knowledge
+source:
+ repoURL: 'https://eclipse-tractusx.github.io/charts/dev'
+ targetRevision: 1.9.8
+ plugin:
+ env:
+ - name: HELM_VALUES
+ value: |
+ image:
+ pullPolicy: Always
+ repositories:
+ prognosis: |
+ #
+ # Rdf4j configuration for prognosis remoting
+ #
+ @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+ @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+ @prefix rep: <http://www.openrdf.org/config/repository#> .
+ @prefix sr: <http://www.openrdf.org/config/repository/sail#> .
+ @prefix sail: <http://www.openrdf.org/config/sail#> .
+ @prefix sp: <http://spinrdf.org/sp#> .
+ @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+ @prefix json: <https://json-schema.org/draft/2020-12/schema#> .
+ @prefix dcterms: <http://purl.org/dc/terms/> .
+ @prefix cx-fx: <https://w3id.org/catenax/ontology/function#> .
+ @prefix cx-common: <https://w3id.org/catenax/ontology/common#> .
+ @prefix cx-prognosis: <https://w3id.org/catenax/ontology/prognosis#> .
+ @prefix cx-rt: <https://w3id.org/catenax/ontology/runtime#> .
+
+ [] rdf:type rep:Repository ;
+ rep:repositoryID "prognosis" ;
+ rdfs:label "Prognosis Functions" ;
+ rep:repositoryImpl [
+ rep:repositoryType "openrdf:SailRepository" ;
+ sr:sailImpl [
+ sail:sailType "org.eclipse.tractusx.agents:Remoting" ;
+ cx-fx:callbackAddress <https://agent-provider-dp.stable.demo.catena-x.net/callback> ;
+ cx-fx:supportsInvocation cx-prognosis:Prognosis;
+ ]
+ ].
+
+ cx-prognosis:Prognosis rdf:type cx-fx:Function;
+ dcterms:description "Prognosis is a sample simulation function with input and output bindings."@en ;
+ dcterms:title "Prognosis" ;
+ cx-fx:targetUri "https://api.agify.io";
+ cx-fx:input cx-prognosis:name;
+ cx-fx:result cx-prognosis:hasResult.
+
+ cx-prognosis:hasResult rdf:type cx-fx:Result;
+ cx-fx:output cx-prognosis:prediction;
+ cx-fx:output cx-prognosis:support.
+
+ cx-prognosis:name rdf:type cx-fx:Argument;
+ dcterms:description "Name is an argument to the Prognosis function."@en ;
+ dcterms:title "Name";
+ cx-fx:argumentName "name".
+
+ cx-prognosis:prediction rdf:type cx-fx:ReturnValue;
+ dcterms:description "Prediction (Value) is an integer-based output of the Prognosis function."@en ;
+ dcterms:title "Prediction" ;
+ cx-fx:valuePath "age";
+ cx-fx:dataType xsd:int.
+
+ cx-prognosis:support rdf:type cx-fx:ReturnValue;
+ dcterms:description "Support (Value) is another integer-based output of the Prognosis function."@en ;
+ dcterms:title "Support" ;
+ cx-fx:valuePath "count";
+ cx-fx:dataType xsd:int.
+ chart: remoting-agent
+destination:
+ server: 'https://kubernetes.default.svc'
+ namespace: product-knowledge
+```
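The repository definition above binds a SPARQL invocation to a REST call: `cx-fx:argumentName` maps an input variable onto a request parameter of the target URI, and each `cx-fx:valuePath` picks one field out of the JSON response. The response-side binding can be sketched as follows (plain Python, illustrative; the actual work happens inside the RDF4J remoting sail):

```python
# Illustrative only: map a JSON service response onto the declared
# output bindings using the valuePath declarations from the config.
value_paths = {"prediction": "age", "support": "count"}  # output -> valuePath

def bind_outputs(response: dict) -> dict:
    """Extract one output binding per declared valuePath."""
    return {output: response[path] for output, path in value_paths.items()}

# Shape of a typical https://api.agify.io response for ?name=...
sample = {"name": "meelad", "age": 31, "count": 20}
print(bind_outputs(sample))
```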
+
+### 5. Perform Smoke Tests
+
+We provide a [Postman collection](https://www.postman.com/catena-x/workspace/catena-x-knowledge-agents/folder/2757771-f11c5dda-cc04-444f-b38b-3deb3c098478?action=share&creator=2757771&ctx=documentation&active-environment=2757771-31115ff3-61d7-4ad6-8310-1e50290a1c3a) and a corresponding [environment](https://www.postman.com/catena-x/workspace/catena-x-knowledge-agents/environment/2757771-31115ff3-61d7-4ad6-8310-1e50290a1c3a?action=share&creator=2757771&active-environment=2757771-3a7489c5-7540-470b-8e44-04610511d9a9).
+
+It consists of the following steps:
+- Query Provider Agent (Internally)
+- Query Provider Agent (Internally from Agent Plane)
+- Query Remoting Agent (Internally)
+- Query Remoting Agent (Internally from Agent Plane)
+- Create Graph Policy (Provider)
+- Create Graph Contract (Provider)
+- Create Data Graph Asset (Provider)
+- Create Function Graph Asset (Provider)
+- Show Own Catalogue (Provider)
+- Show Remote Catalogue (Consumer)
+- Query Data Graph Asset (Consumer)
+- Query Function Graph Asset (Consumer)
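For example, the "Query Provider Agent" steps issue a SPARQL `SELECT` against the agent endpoint of the data plane. A hedged sketch of how such a request URL could be composed (the `/api/agent` path and the `asset` parameter are assumptions to verify against the Postman collection):

```python
from urllib.parse import urlencode

# Compose (but do not send) a SPARQL query URL for the provider agent.
# Path and parameter names are assumptions; check the Postman collection.
def agent_request(base: str, asset: str, query: str) -> str:
    params = urlencode({"asset": asset, "query": query})
    return f"{base}/api/agent?{params}"

url = agent_request(
    "https://agent-provider-dp.stable.demo.catena-x.net",
    "GraphAsset?local=Dataspace",
    "SELECT ?subject WHERE { ?subject ?predicate ?object } LIMIT 1",
)
print(url)
```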
+
+
+
(C) 2021,2023 Contributors to the Eclipse Foundation. SPDX-License-Identifier: CC-BY-4.0
diff --git a/docs-kits/kits/knowledge-agents/page_changelog.md b/docs-kits/kits/knowledge-agents/page_changelog.md
index 0a001303230..4f4dcae6b4f 100644
--- a/docs-kits/kits/knowledge-agents/page_changelog.md
+++ b/docs-kits/kits/knowledge-agents/page_changelog.md
@@ -33,6 +33,16 @@ sidebar_position: 1
All notable changes to the (Knowledge) Agents KIT (=Keep It Together) will be documented in this file.
+## [1.0] - 2023-11-17
+
+### Added
+
+- Stable Deployment Example
+
+### Changed
+
+- Simplified Adoption View
+
## [0.1] - 2023-09-29
### Added