diff --git a/SUMMARY.md b/SUMMARY.md index 137ebaa9..d2fb5035 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -373,7 +373,12 @@ * [Registration Tool Kit](utilities-and-tools/registration-tool-kit.md) * [Monitoring and Reporting](monitoring-and-reporting/README.md) * [Apache Superset](monitoring-and-reporting/apache-superset.md) - * [Reporting Framework](monitoring-and-reporting/reporting-framework.md) + * [Reporting Framework](monitoring-and-reporting/reporting-framework/README.md) + * [📔 User Guides](monitoring-and-reporting/reporting-framework/user-guides/README.md) + * [Connector Creation Guide](monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md) + * [Dashboards Creation Guide](monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md) + * [Installation & Troubleshooting](monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md) + * [Kafka Connect Transform Reference](monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md) * [System Logging](monitoring-and-reporting/logging.md) * [System Health](monitoring-and-reporting/system-health.md) * [Privacy and Security](privacy-and-security/README.md) diff --git a/deployment/deployment-guide/keycloak-client-creation.md b/deployment/deployment-guide/keycloak-client-creation.md index c136c16d..169ac363 100644 --- a/deployment/deployment-guide/keycloak-client-creation.md +++ b/deployment/deployment-guide/keycloak-client-creation.md @@ -31,13 +31,17 @@ The steps to create a Keycloak client are given below. * Authentication flow: Select the `Standard flow` and `Service accounts roles` * Valid redirect URIs: `*` 4. Save the changes and click the _**Credentials**_ tab above. You must note down the client ID and secret to add while installing the OpenG2P modules. -5. Click the _**Client Scopes**_ tab. +5. Click the _**Client Scopes**_ tab. 6. Select the client that you created in the _**Client Scopes**._ 7. Select the _**From Predefined Mappers**_ from the _**Add Mapper**_ drop-down. -8. In the _**Add Predefined Mapper**_ screen, check all the mappers below the _**Name**_ column, and click the _**Add**_ button. -9. After adding predefined mappers, search for the _**Client**_ from the filter, select _**Client Roles,**_ update, and save the below changes. +8. In the _**Add Predefined Mapper**_ screen, choose to show all mappers on a single page. Check all the mappers below the _**Name**_ column, and click the _**Add**_ button. +9. Search for and remove the "Audience Resolve" mapper from the list of added mappers. Click **Add Mapper** -> **By configuration** and select the **Audience** mapper on the **Configure new mapper** page. Configure the audience mapper with the following details: * Client ID: `select your Client ID from the drop-down` - * Token Claim Name: `client_roles` - * Add to ID token: `ON` - * Add to userinfo: `ON` -10. After the successful creation of the client, you can use this client for the OpenG2P module installation from the Rancher UI. + * Add to Access Token: `ON` + * Add to ID token: `ON` +10. After adding the predefined mappers, search for "client" in the filter, select the _**Client Roles**_ mapper, and update and save the following changes: + * Client ID: `select your Client ID from the drop-down` + * Token Claim Name: `client_roles` + * Add to ID token: `ON` + * Add to userinfo: `ON` +11. After the successful creation of the client, you can use this client for the OpenG2P module installation from the Rancher UI.
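For illustration, once the Audience and Client Roles mappers above are configured, the decoded access token issued for this client should carry the audience and `client_roles` claims roughly as in the sketch below. This is an assumption, not output from a real deployment: the values in angle brackets are placeholders, and the exact roles depend on the client roles you create and assign.

```json
{
  "aud": "<your-client-id>",
  "azp": "<your-client-id>",
  "preferred_username": "<username>",
  "client_roles": ["<role-assigned-to-the-user>"]
}
```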
diff --git a/monitoring-and-reporting/README.md b/monitoring-and-reporting/README.md index eb599fc8..446e967e 100644 --- a/monitoring-and-reporting/README.md +++ b/monitoring-and-reporting/README.md @@ -23,11 +23,11 @@ Monitoring the status of programs and registries is vital for program administra The following tools are provided * [Apache Superset](https://superset.apache.org/) for visual pre-configured **dashboards.** -* [Reporting Framework](reporting-framework.md) for real-time updates and **slicing and dicing of data.** +* [Reporting Framework](reporting-framework/) for real-time updates and **slicing and dicing of data.** * [Logging pipeline](logging.md) for **system logs** monitoring. * [Prometheus and Grafana](system-health.md) for **system health** monitoring. -
-| Card | Cover image | Link |
-| --- | --- | --- |
-| Dashboards | apache-superset-dashboard.png | apache-superset.md |
-| Reporting Framework | reporting-dashboard (1).png | reporting-framework.md |
-| Logging | opensearch-log-dashboard.png | logging.md |
-| System Health | prometheus-grafana.png | system-health.md |
+| Card | Cover image | Link |
+| --- | --- | --- |
+| Dashboards | apache-superset-dashboard.png | apache-superset.md |
+| Reporting Framework | reporting-dashboard (1).png | reporting-framework |
+| Logging | opensearch-log-dashboard.png | logging.md |
+| System Health | prometheus-grafana.png | system-health.md |
diff --git a/monitoring-and-reporting/reporting-framework.md b/monitoring-and-reporting/reporting-framework/README.md similarity index 54% rename from monitoring-and-reporting/reporting-framework.md rename to monitoring-and-reporting/reporting-framework/README.md index fbcb81a2..6c608e0c 100644 --- a/monitoring-and-reporting/reporting-framework.md +++ b/monitoring-and-reporting/reporting-framework/README.md @@ -32,35 +32,15 @@ The salient features of the framework are the following: ## Installation -Reporting framework is installed as part of modules' installation via the Helm chart that installs the respective module. Note that during installation you need to specify the Github location and branch for both the Debezium and Kafka connectors. For example: [https://github.com/OpenG2P/openg2p-reporting/tree/develop/scripts/social-registry](https://github.com/OpenG2P/openg2p-reporting/tree/develop/scripts/social-registry) - -If you would like to update these connectors for your dashboards, update the files on Github. - -## Post installation check - -To ensure that all Kafka connectors are working login into Kafka UI (domain name is set during installation) and check the connectors' status. - -
- - - - - ## Configuring the pipeline for specific dashboards - -### Debezium connector - -* Inspect the Debezium connector for fields that are shunted to OpenSearch. See example connector: [https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/social-registry/debezium-connectors/default.json](https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/social-registry/debezium-connectors/default.json) -* Carefully inspect the `column.exclude.list` field -- make sure you add the fields from Social Registry that must NOT be indexed. Specifically, PII fields like name, address, phone number etc. As a general rule, fields that are not required for dashboards must be excluded explicitly. -* To see trend data and changes in values of fields based on time, the old data should be preserved. Refer to this guide. (_TBD_) +Refer to [Installation Guide and Post Installation Check](user-guides/installation-and-troubleshooting.md). ## Accessing OpenSearch dashboards * Pick the URL provided during the installation of the Helm chart of the module (like SR, PBMS) -* Add Keycloak roles to the user who is accessing the dashboard (as given [here](../social-registry/deployment/#post-installation)). +* Add Keycloak roles to the user who is accessing the dashboard (as given [here](user-guides/installation-and-troubleshooting.md#assigning-roles-to-users)). * Confirm that the number of indexed records in OpenSearch matches the number of rows in the DB (_guide TBD_). This check confirms that the reporting pipeline is working fine. ## Creating dashboards -* On OpenSearch Dashboard, create an Index Pattern and create dashboards. [Learn more>>](https://opensearch.org/docs/latest/dashboards/dashboard/index/) -* If you have relational queries across tables, the connectors need to be written in a certain way. Refer to this guide. _(TBD)_ +* [Create connectors](user-guides/connector-creation-guide.md). +* [Create Dashboards](user-guides/dashboards-creation-guide.md). diff --git a/monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md b/monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md new file mode 100644 index 00000000..59d720f2 --- /dev/null +++ b/monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md @@ -0,0 +1,94 @@ +# Kafka Connect Transform Reference + +This document is the configuration reference guide for the Kafka SMTs (Single Message Transforms) developed by OpenG2P, which can be used with the [OpenSearch Sink Connectors](https://github.com/OpenG2P/openg2p-reporting). + +Apart from the ones developed by OpenG2P, the following transformations are also available on the OpenSearch connectors: + +* [Apache Kafka Connect SMTs](https://kafka.apache.org/documentation/#connect\_included\_transformation). +* [Debezium Kafka Connect Transformations](https://debezium.io/documentation/reference/stable/transformations/index.html). + +## Transformations + +### DynamicNewField + +#### Class name: + +* `org.openg2p.reporting.kafka.connect.DynamicNewField$Key` - Applies the transform only to the _Key_ of the Kafka Connect record. +* `org.openg2p.reporting.kafka.connect.DynamicNewField$Value` - Applies the transform only to the _Value_ of the Kafka Connect record. + +#### Description: + +* This transformation can be used to query external data sources to retrieve new fields and add them to the current record, based on the values of some existing fields of this record. +* Currently, only Elasticsearch-based queries are supported.
This means any index on Elasticsearch (or OpenSearch) can be queried, and new fields can be populated based on fields from the current record. + * Some selected fields from the current record are taken, and ES is queried for documents whose corresponding fields match those values. The top response is picked, and fields from that response can be added back to the current record. + +#### Configuration:
| Field name | Field title | Description | Default Value |
| --- | --- | --- | --- |
| `query.type` | Query Type | The type of query made to retrieve new field values. Supported values: `es` (Elasticsearch based). | `es` |
| `input.fields` | Input Fields | List of comma-separated fields that will be considered as input fields in the current record. Nested input fields are supported, like `profile.name,profile.birthdate` (where `profile` is JSON that contains `name` and `birthdate` fields). | |
| `output.fields` | Output Fields | List of comma-separated fields to be added to this record. | |
| `input.default.values` | Input Default Values | List of comma-separated values to give in place of the input fields when an input field is empty or null. The length of this list has to match that of `input.fields`. | |
| `es.index` | ES Index | Elasticsearch (or OpenSearch) index to query. | |
| `es.input.fields` | ES Input Fields | List of comma-separated fields to be queried on the ES index, each of which maps to a field in `input.fields`. The length of this list has to match that of `input.fields`. | |
| `es.output.fields` | ES Output Fields | List of comma-separated fields to be retrieved from the ES query response document, each of which maps to a field in `output.fields`. The length of this list has to match that of `output.fields`. | |
| `es.input.query.add.keyword` | ES Input Query Add Keyword | Whether or not to add `.keyword` to the `es.input.fields` during the term query. Supported values: `true` / `false`. | `false` |
| `es.security.enabled` | ES Security Enabled | If this value is given as `true`, then security is enabled on ES. | |
| `es.url` | ES URL | Elasticsearch/OpenSearch base URL. | |
| `es.username` | ES Username | | |
| `es.password` | ES Password | | |
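As a minimal illustrative sketch (not taken from the source), a DynamicNewField transform entry inside an OpenSearch sink connector config could look like the following. It uses the class name listed above and the properties from the table; the alias `lookup01`, the index name, the field names, and the URL are placeholders, and each `$` would need to be written as `${dollar}` if the file is processed by the reporting installer described in the user guides.

```json
"transforms": "lookup01",
"transforms.lookup01.type": "org.openg2p.reporting.kafka.connect.DynamicNewField$Value",
"transforms.lookup01.query.type": "es",
"transforms.lookup01.input.fields": "program_id",
"transforms.lookup01.output.fields": "program_name",
"transforms.lookup01.es.index": "sr.public.g2p_program",
"transforms.lookup01.es.input.fields": "id",
"transforms.lookup01.es.output.fields": "name",
"transforms.lookup01.es.url": "http://opensearch:9200"
```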
+ +### StringToJson + +#### Class name: + +* `org.openg2p.reporting.kafka.connect.StringToJson$Key` - Applies the transform only to the _Key_ of the Kafka Connect record. +* `org.openg2p.reporting.kafka.connect.StringToJson$Value` - Applies the transform only to the _Value_ of the Kafka Connect record. + +#### Description: + +* This transformation can be used to convert a JSON string, present in a field of the record, into a JSON object. Example: + + ```json + {"profile": "{\"name\":\"Temp\"}"} -> {"profile": {"name": "Temp"}} + ``` +* Currently, this transform only works in schemaless mode (`value.converter.schemas.enable=false`). + +#### Configuration
| Field name | Field title | Description | Default Value |
| --- | --- | --- | --- |
| `input.field` | Input Field | Input field that contains the JSON string. | |
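A minimal sketch of how this transform could be configured in a sink connector, using the class name listed above; the alias `profileToJson` and the field name are placeholders:

```json
"transforms": "profileToJson",
"transforms.profileToJson.type": "org.openg2p.reporting.kafka.connect.StringToJson$Value",
"transforms.profileToJson.input.field": "profile"
```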
+ +### TimestampConverterAdv + +#### Class name: + +* `org.openg2p.reporting.kafka.connect.TimestampConverterAdv$Key` - Applies the transform only to the _Key_ of the Kafka Connect record. +* `org.openg2p.reporting.kafka.connect.TimestampConverterAdv$Value` - Applies the transform only to the _Value_ of the Kafka Connect record. + +#### Description: + +* This transformation can be used to convert a timestamp, present in a field of the record, to another format. Example (using the default `milli_sec` input type and the default output format): + + ```json + {"create_date": 1723667415069} -> {"create_date": "2024-08-14T20:30:15.069Z"} + ``` +* Currently, the output can only be in the form of a string. + +#### Configuration
| Field name | Field title | Description | Default Value |
| --- | --- | --- | --- |
| `field` | Input Field | Input field that contains the timestamp. | |
| `input.type` | Input Type | Supported values: `milli_sec` (input is present as milliseconds since epoch), `micro_sec` (input is present as microseconds since epoch; useful for converting the Datetime field of PostgreSQL), `days_epoch` (input is present as days since epoch; useful for converting the Date field of PostgreSQL). | `milli_sec` |
| `output.type` | Output Type | Supported values: `string` (gives output as a string). | `string` |
| `output.format` | Output Format | Format of the string output. | `yyyy-MM-dd'T'HH:mm:ss.SSS'Z'` |
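A minimal sketch of this transform in a sink connector, using the class name listed above and assuming a PostgreSQL Datetime column named `create_date` (the alias `tsconvert02` is a placeholder):

```json
"transforms": "tsconvert02",
"transforms.tsconvert02.type": "org.openg2p.reporting.kafka.connect.TimestampConverterAdv$Value",
"transforms.tsconvert02.field": "create_date",
"transforms.tsconvert02.input.type": "micro_sec",
"transforms.tsconvert02.output.format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
```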
+ +### TimestampSelector + +#### Class name: + +* `org.openg2p.reporting.kafka.connect.TimestampSelector$Key` - Applies the transform only to the _Key_ of the Kafka Connect record. +* `org.openg2p.reporting.kafka.connect.TimestampSelector$Value` - Applies the transform only to the _Value_ of the Kafka Connect record. + +#### Description: + +* This transformation can be used to create a new timestamp field whose value is selected from other fields, taking the first field in the given order that is not empty. Example (when `ts.order` is `profile.write_date,profile.create_date` and `output.field` is `@timestamp_gen`): + + ```json + {"profile": {"create_date": 7415, "write_date": null}} -> {"@timestamp_gen": 7415, "profile": {"create_date": 7415, "write_date": null}} + {"profile": {"create_date": 2945, "write_date": 3442}} -> {"@timestamp_gen": 3442, "profile": {"create_date": 2945, "write_date": 3442}} + ``` + +#### Configuration
| Field name | Field title | Description | Default Value |
| --- | --- | --- | --- |
| `ts.order` | Timestamp Order | List of comma-separated fields to select the output from. The output will be selected based on whichever field in the order is not null first. Nested fields are supported. | |
| `output.field` | Output Field | Name of the output field into which the selected timestamp is put. | `@ts_generated` |
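A minimal sketch of this transform in a sink connector, using the class name listed above; the alias `tsSelect` is a placeholder and the field names are illustrative:

```json
"transforms": "tsSelect",
"transforms.tsSelect.type": "org.openg2p.reporting.kafka.connect.TimestampSelector$Value",
"transforms.tsSelect.ts.order": "write_date,create_date",
"transforms.tsSelect.output.field": "@timestamp_gen"
```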
+ +## Source Code + +[https://github.com/OpenG2P/openg2p-reporting/tree/develop/opensearch-kafka-connector](https://github.com/OpenG2P/openg2p-reporting/tree/develop/opensearch-kafka-connector) diff --git a/monitoring-and-reporting/reporting-framework/user-guides/README.md b/monitoring-and-reporting/reporting-framework/user-guides/README.md new file mode 100644 index 00000000..5bf4dc3c --- /dev/null +++ b/monitoring-and-reporting/reporting-framework/user-guides/README.md @@ -0,0 +1,2 @@ +# 📔 User Guides + diff --git a/monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md b/monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md new file mode 100644 index 00000000..1ef08dc2 --- /dev/null +++ b/monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md @@ -0,0 +1,177 @@ +# Connector Creation Guide + +Creating dashboards for reporting involves the following steps: + +* Understanding which database tables need to be indexed into OpenSearch for the dashboard. +* Creating a pipeline for the data flow to OpenSearch. This pipeline involves: + * Creating one Debezium connector per database (a single connector covers all the required tables of that database). + * Creating one OpenSearch connector for each required database table. +* Creating a dashboard on OpenSearch. + +Follow the guides on this page to learn more about each step in the process above. + +This document contains instructions for the developers (or dashboard creators) on creating the required connectors and dashboards to visualize reports on OpenSearch. + +Follow the [Installation guide](installation-and-troubleshooting.md) to install/update the connector configuration. + +## Prerequisites + +* Create a GitHub repository (or a new directory in an existing repository) that will store the configuration for the connectors and the dashboards for OpenSearch. +* Create a directory in the repository with these three folders: `debezium-connectors`, `opensearch-connectors`, and `opensearch-dashboards`. +* For example, [https://github.com/OpenG2P/openg2p-reporting/tree/develop/scripts/social-registry](https://github.com/OpenG2P/openg2p-reporting/tree/develop/scripts/social-registry). +* Identify the tables from the database whose data will be required for the reports. + +## Debezium connector creation + +* One Debezium connector is sufficient for indexing all the required tables of one database, so create one connector for each database (rather than one for each table). +* Create a JSON file in the `debezium-connectors` folder, with the following contents. Each JSON file corresponds to one Debezium connector: + + ```json + { + "name": "${DB_PREFIX_INDEX}_${DB_NAME}", + "config": { + "connector.class": "io.debezium.connector.postgresql.PostgresConnector", + "plugin.name": "pgoutput", + "publication.autocreate.mode": "filtered", + "slot.name": "dbz_${DB_PREFIX_INDEX}_${DB_NAME}", + "publication.name": "dbz_pub_${DB_PREFIX_INDEX}_${DB_NAME}", + "database.hostname": "${DB_HOSTNAME}", + "database.port": "${DB_PORT}", + "database.user": "${DB_USER}", + "database.password": "${DB_PASS}", + "database.dbname": "${DB_NAME}", + "topic.prefix": "${DB_PREFIX_INDEX}", + "table.include.list": "", + "column.exclude.list": "", + "heartbeat.interval.ms": "${DEFAULT_DEBEZIUM_CONNECTOR_HEARTBEAT_MS}", + "decimal.handling.mode": "double" + } + } + ``` + +{% hint style="info" %} +Each `$` in the JSON file will be treated as an environment variable. Environment variables will be automatically picked up during installation.
If you want to use a dollar in the file and not have it parsed as an env variable during installation, replace your `$` with `${dollar}`. +{% endhint %} + +* Add the list of all tables required from this database into the `table.include.list` field (in no particular order; accepts regex). For example: + + ```json + "table.include.list": "public.res_partner,public.g2p_program_membership,public.g2p_programs" + ``` + + * This list also needs to include tables related to the current table. For example: if you want to index `g2p_program_membership` but would also like to retrieve the name of the program to which the beneficiary belongs, then you have to add `g2p_program` as well. +* This will index all the columns into OpenSearch by default. Every column that you don't want to index into OpenSearch has to be explicitly mentioned in the `column.exclude.list` (accepts regex), for example, PII fields like name, phone number, address, etc. As a general rule, fields that are not required for dashboards must be excluded explicitly. + + ```json + "column.exclude.list": "public.res_partner.name,public.res_partner.phone,public.res_partner.address" + ``` +* Example Debezium connector: [https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/social-registry/debezium-connectors/default.json](https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/social-registry/debezium-connectors/default.json) +* Reference: [Debezium PostgreSQL Connector](https://debezium.io/documentation/reference/stable/connectors/postgresql.html). + +## OpenSearch connector creation + +* Each JSON file in the `opensearch-connectors` folder will be considered a connector. Create one connector file for each table, with the following content: + + ```json + { + "name": "res_partner_${DB_PREFIX_INDEX}", + "config": { + "connector.class": "io.aiven.kafka.connect.opensearch.OpensearchSinkConnector", + "connection.url": "${OPENSEARCH_URL}", + "connection.username": "${OPENSEARCH_USERNAME}", + "connection.password": "${OPENSEARCH_PASSWORD}", + "tasks.max": "1", + "topics": "${DB_PREFIX_INDEX}.public.res_partner", + "key.ignore": "false", + "schema.ignore": "true", + "key.converter": "org.apache.kafka.connect.json.JsonConverter", + "value.converter": "org.apache.kafka.connect.json.JsonConverter", + "key.converter.schemas.enable": "true", + "value.converter.schemas.enable": "false", + + "behavior.on.null.values": "delete", + "behavior.on.malformed.documents": "warn", + "behavior.on.version.conflict": "warn", + + "transforms": "keyExtId,valExt1,valExt2,tsconvert01,...", + + "transforms.keyExtId.type": "org.apache.kafka.connect.transforms.ExtractField${dollar}Key", + "transforms.keyExtId.field": "id", + + "transforms.valExt1.type": "org.apache.kafka.connect.transforms.ExtractField${dollar}Value", + "transforms.valExt1.field": "payload", + "transforms.valExt2.type": "org.apache.kafka.connect.transforms.ExtractField${dollar}Value", + "transforms.valExt2.field": "after", + + "transforms.tsconvert01.type": "org.openg2p.reporting.kafka.connect.transforms.TimestampConverterAdv${dollar}Value", + "transforms.tsconvert01.field": "source_ts_ms", + + ... + } + ``` +* Replace `name` with the appropriate table name. +* Replace the `topics` field with the name of the table. + + ```json + "topics": "${DB_PREFIX_INDEX}.public.g2p_program", + ``` +* Setting `key.ignore` to `true` will make every change to an entry get indexed as a new entry in OpenSearch. This will be useful when trying to store the history of changes.
+* After the base file is configured, you can now add transformations to your connector at the end of the file (denoted by `...` in the above example). Each transformation (SMT) will apply some change to the data or to a particular field from the table, before pushing the entry to OpenSearch. +* Add the following transformations to your connector based on the data available in the table. + * For every Datetime/Date field in the table, add the following transform. + + ```json + "transforms.tsconvert02.type": "org.openg2p.reporting.kafka.connect.transforms.TimestampConverterAdv${dollar}Value", + "transforms.tsconvert02.field": "create_date", + "transforms.tsconvert02.input.type": "micro_sec", + ``` + * At the end of all the transformations, add a TimestampSelector transform, which creates a new `@timestamp_gen` field whose value can be selected from any of the available Datetime fields in the table. This will be useful while creating a dashboard on OpenSearch, where we can use this new `@timestamp_gen` field as the index pattern timestamp. + + ```json + "transforms.tsSelect.type": "org.openg2p.reporting.kafka.connect.transforms.TimestampSelector${dollar}Value", + "transforms.tsSelect.ts.order": "write_date,create_date", + "transforms.tsSelect.output.field": "@timestamp_gen" + ``` + * If you want to pull data from another table (which is already indexed into OpenSearch) into the table that this connector is pointing to, use the DynamicNewField transform. For example, `g2p_program_membership` contains the beneficiary list, but the demographic info of the beneficiary is present in the `res_partner` table. Say you want to pull the gender and address of the beneficiary, and the name of the program that the beneficiary is part of; then create two transforms like this: + + ```json + "transforms.join01.type": "org.openg2p.reporting.kafka.connect.transforms.DynamicNewField${dollar}Value", + "transforms.join01.input.fields": "program_id", + "transforms.join01.output.fields": "program_name", + "transforms.join01.es.index": "${DB_PREFIX_INDEX}.public.g2p_program", + "transforms.join01.es.input.fields": "id", + "transforms.join01.es.output.fields": "name", + "transforms.join01.es.security.enabled": "${OPENSEARCH_SECURITY_ENABLED}", + "transforms.join01.es.url": "${OPENSEARCH_URL}", + "transforms.join01.es.username": "${OPENSEARCH_USERNAME}", + "transforms.join01.es.password": "${OPENSEARCH_PASSWORD}", + + "transforms.join02.type": "org.openg2p.reporting.kafka.connect.transforms.DynamicNewField${dollar}Value", + "transforms.join02.input.fields": "partner_id", + "transforms.join02.output.fields": "beneficiary_gender,beneficiary_address", + "transforms.join02.es.index": "${DB_PREFIX_INDEX}.public.res_partner", + "transforms.join02.es.input.fields": "id", + "transforms.join02.es.output.fields": "gender,address", + "transforms.join02.es.security.enabled": "${OPENSEARCH_SECURITY_ENABLED}", + "transforms.join02.es.url": "${OPENSEARCH_URL}", + "transforms.join02.es.username": "${OPENSEARCH_USERNAME}", + "transforms.join02.es.password": "${OPENSEARCH_PASSWORD}", + ``` + * After configuring all the transforms, add the names of all transforms, in the order in which they have to be applied, in the `transforms` field. + + ```json + "transforms": "keyExtId,valExt1,valExt2,tsconvert01,tsconvert02,tsSelect", + ``` + +{% hint style="info" %} +Each `$` in the JSON file will be treated as an environment variable. Environment variables will be automatically picked up during installation.
If you want to use a dollar in the file and not have it parsed as an env variable during installation, replace your `$` with `${dollar}`. +{% endhint %} + +* Example OpenSearch connector: [https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/pbms/opensearch-connectors/30.g2p\_program\_membership.json](https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/pbms/opensearch-connectors/30.g2p\_program\_membership.json) +* For more info on basic connector configuration, refer to [Apache Kafka Connect](https://kafka.apache.org/documentation/#connect). +* For detailed transform configuration, refer to the [Apache Kafka Connect Transformations](https://kafka.apache.org/documentation/#connect\_transforms) doc. +* For a list of all available SMTs and their configs, refer to [Reporting Kafka Connect Transforms](../kafka-connect-transform-reference.md). + +## OpenSearch dashboard creation + +Refer to the [OpenSearch Dashboard Creation Guide](dashboards-creation-guide.md). diff --git a/monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md b/monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md new file mode 100644 index 00000000..04a9f6ed --- /dev/null +++ b/monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md @@ -0,0 +1,20 @@ +# Dashboards Creation Guide + +This document contains instructions for the developers (or dashboard creators) on creating dashboards to visualize data on OpenSearch. + +## Prerequisites + +* OpenSearch and Reporting are installed, and user roles are assigned so that OpenSearch can be accessed. + +## Procedure + +* Go to OpenSearch Dashboards -> Dashboard Management -> Index Pattern. Create an index pattern with the following parameters: + * Index pattern name: schema name + table name, with a wildcard to match all environments. Example: `*.public.res_partner*` or `*.public.g2p_program_membership`. + * Timestamp field: `@timestamp_gen`. +* Go to Discover and select the index pattern (from the menu on the top left) to look at the data present in OpenSearch. +* Go to Visualization (or the Visualize menu), and create all the visualizations as per your requirements, with appropriate names. Each visualization corresponds to one graph/metric/chart. +* Go to Dashboards. Create a dashboard, and add all the visualizations created before to this dashboard. Each dashboard is a collection of visualizations. Lay out the position and size of the visualizations on the dashboard and save it. +* Export the dashboard from the Saved Objects menu (include related objects while exporting). + +[Learn more>>](https://opensearch.org/docs/latest/dashboards/dashboard/index/) + diff --git a/monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md b/monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md new file mode 100644 index 00000000..e99bfb88 --- /dev/null +++ b/monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md @@ -0,0 +1,39 @@ +# Installation & Troubleshooting + +## Installation + +The reporting framework is installed as part of a module's installation, via the Helm chart that installs the respective module. Note that during installation you need to specify the GitHub repository URL, and the branch and directory that contain the Debezium and OpenSearch connectors.
For example: + +[https://github.com/OpenG2P/openg2p-reporting/tree/develop/scripts/social-registry](https://github.com/OpenG2P/openg2p-reporting/tree/develop/scripts/social-registry) + +To create or update connectors, follow the [Connector Creation Guide](connector-creation-guide.md). + +#### Assigning roles to users + +Create [Keycloak client roles](https://www.keycloak.org/docs/latest/server\_admin/#con-client-roles\_server\_administration\_guide) for the following components and assign them to users:
| Component | Role name |
| --- | --- |
| OpenSearch Dashboards for Reporting | admin |
| Kafka UI for Reporting | Admin |
+ +## Post-installation check + +To ensure that all Kafka connectors are working, log in to Kafka UI (the domain name is set during installation) and check the connectors' status.
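The same check can also be made against the Kafka Connect REST API (`GET /connectors/<connector-name>/status`); a healthy connector reports `RUNNING` for the connector and for each of its tasks. The connector name and worker address below are illustrative placeholders.

```json
{
  "name": "res_partner_sr",
  "connector": { "state": "RUNNING", "worker_id": "10.42.0.15:8083" },
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "10.42.0.15:8083" } ],
  "type": "sink"
}
```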
+ +## Update Connectors + +This procedure doesn't update the data already present in OpenSearch; it only updates the connector configs, so only new and incoming data is affected. + +* After making changes to connectors/dashboards in your GitHub repo, go to the Installed Apps section on Rancher and upgrade your module (SR, PBMS, etc.) without changing any Helm values. +* When the upgrade finishes, the new reporting connector changes are automatically applied to the connectors. Log in to Kafka UI and check whether the connector config has been updated. + +## Cleanup and uninstall + +This section describes the steps to clean up the connectors and the data so that fresh connectors can be installed again. + +* Log in to Kafka UI -> Kafka Connect section, and delete all the connectors. +* Delete all the topics related to the connectors as well. +* Log in to OpenSearch -> Index Management, and delete all the relevant indices. +* Delete the _replication slots_ and _publications_ on Postgres. + +If you want to install the connectors again, follow the [Update](installation-and-troubleshooting.md#update-connectors) guide. diff --git a/social-registry/deployment/README.md b/social-registry/deployment/README.md index e76403fd..4ebc0456 100644 --- a/social-registry/deployment/README.md +++ b/social-registry/deployment/README.md @@ -80,7 +80,7 @@ image: Create[ Keycloak client roles](https://www.keycloak.org/docs/latest/server\_admin/#con-client-roles\_server\_administration\_guide) for the following components and assign them to users: -
-| Component | Role name |
-| --- | --- |
-| OpenSearch Dashboards for logging | admin |
-| OpenSearch Dashboards for Reporting | admin |
-| Kafka UI for Reporting | Admin |
-| Apache Superset | Admin |
-| Minio Console | consoleAdmin |
+| Component | Role name |
+| --- | --- |
+| OpenSearch Dashboards for logging | admin |
+| OpenSearch Dashboards for Reporting | admin |
+| Kafka UI for Reporting | Admin |
+| Apache Superset | Admin |
+| Minio Console | consoleAdmin |
#### Assigning roles to clients