
Local Development

Generic Logstash pipeline for ELK stack.

Adding pipelines and configs

To add Logstash config files, place them in the <path to project packages>/data-mapper-logstash/pipeline directory.
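As an illustration, a minimal pipeline file could look like the sketch below (the filename, topic, index and field names are examples only, not part of the platform):

input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["2xx"]
    codec => "json"
  }
}

filter {
  # Flatten or enrich the incoming FHIR message here
  mutate {
    add_field => { "pipeline" => "fhir-enrich" }
  }
}

output {
  elasticsearch {
    hosts => ["analytics-datastore-elastic-search:9200"]
    index => "fhir-enrich-reports"
  }
}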

Developing the Logstash configs locally

To make changes to the Logstash configs without repeatedly starting and stopping the service, set the LOGSTASH_DEV_MOUNT env var in your .env file to true to mount the service's config files from your local machine.

Cluster

When attaching Logstash to an Elasticsearch cluster, ensure you use the ES_HOSTS environment variable, e.g. ES_HOSTS="analytics-datastore-elastic-search-1:9200","analytics-datastore-elastic-search-2:9200","analytics-datastore-elastic-search-3:9200", and reference it in your Logstash configs, e.g. hosts => [$ES_HOSTS].
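For illustration, an output block referencing the variable might look like the following sketch (the user and password settings are placeholders only):

output {
  elasticsearch {
    hosts => [$ES_HOSTS]
    user => "elastic"
    password => "${ES_ELASTIC}"
  }
}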

Notes

  • With LOGSTASH_DEV_MOUNT=true, you have to set the LOGSTASH_PACKAGE_PATH variable to the absolute path of the package containing your Logstash config files, i.e., LOGSTASH_PACKAGE_PATH=/home/user/Documents/Projects/platform/data-mapper-logstash. A minimal .env sketch is shown below.

  • WARNING: do not edit the pipeline files from within the Logstash container, otherwise the group and user IDs of the files will change, resulting in file permission errors on your local file system.
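A minimal .env sketch for this dev mode (the path is an example only):

LOGSTASH_DEV_MOUNT=true
LOGSTASH_PACKAGE_PATH=/home/user/Documents/Projects/platform/data-mapper-logstash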

Environment Variables

Listed in this page are all environment variables needed to run the Kafka mapper consumer.

Variable Name | Type | Relevance | Required | Default
KAFKA_HOST | String | Kafka hostname | No | kafka
KAFKA_PORT | Number | Kafka port | No | 9092
CLICKHOUSE_HOST | String | Clickhouse hostname | No | analytics-datastore-clickhouse
CLICKHOUSE_PORT | String | Clickhouse port | No | 8123

Environment Variables

Listed in this page are all environment variables needed to run Ofelia.

The Ofelia service does not make use of any environment variables. However, when specifying jobs in the config.ini file(s) we can pass any environment variable in.

Example:

[job-run "mongo-backup"]
schedule= @daily
image= mongo:4.2
network= mongo_backup
volume= /backups:/tmp/backups
command= sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete= true

In the example above, OPENHIM_MONGO_URL is an environment variable.

Environment Variables

Listed in this page are all environment variables needed to run Jsreport.

Variable Name | Type | Relevance | Required | Default
JS_REPORT_LICENSE_KEY | String | Service license key | Yes |
JS_REPORT | String | Jsreport service password | No | dev_password_only
JS_REPORT_USERNAME | String | Jsreport service username | No | admin
JS_REPORT_SECRET | String | Secret password for the authentication of a cookie session related to the extension used in Jsreport | No | dev_secret_only
ES_HOSTS | String | Elasticsearch connection string | No | analytics-datastore-elastic-search:9200
ES_PASSWORD | String | Elasticsearch password (for request authentication) | No | dev_password_only
ES_USERNAME | String | Elasticsearch username (for request authentication) | No | elastic
JS_REPORT_INSTANCES | Number | Number of service replicas | No | 1
JS_REPORT_SSL | Boolean | SSL protocol requirement | No | false
JS_REPORT_CONFIG_FILE | String | Path to the service import file | No | export.jsrexport
JS_REPORT_DEV_MOUNT | Boolean | Dev mount mode enabling flag | No | false
JS_REPORT_PACKAGE_PATH | String | Local path to package | Yes if JS_REPORT_DEV_MOUNT is set to true |
JS_REPORT_CPU_LIMIT | Number | CPU usage limit | No | 0
JS_REPORT_MEMORY_LIMIT | String | RAM usage limit | No | 3G
JS_REPORT_CPU_RESERVE | Number | Reserved CPU | No | 0.05
JS_REPORT_MEMORY_RESERVE | String | Reserved RAM | No | 500M

Environment Variables

Listed in this page are all environment variables needed to run Superset.

Variable Name | Type | Relevance | Required | Default
SUPERSET_USERNAME | String | Service username | No | admin
SUPERSET_FIRSTNAME | String | Admin account first name | No | SUPERSET
SUPERSET_LASTNAME | String | Admin account last name | No | ADMIN
SUPERSET_EMAIL | String | Admin account email address | No | admin@superset.com
SUPERSET_PASSWORD | String | Admin account password | No | admin
SUPERSET_API_USERNAME | String | Service username | No | admin
SUPERSET_API_PASSWORD | String | Service password | No | admin
SUPERSET_SSL | Boolean | SSL protocol requirement | No | False
CONFIG_FILE | String | Path to the dashboard import file | No | superset-export.zip

Local Development

A FHIR compliant server for the platform.

Instant OpenHIE FHIR Data Store Component

This component consists of two services:

  • Postgres

  • HAPI FHIR Server

Accessing the services

HAPI FHIR

This service is accessible for testing via http://127.0.0.1:3447.

In a publicly accessible deployment this port should not be exposed. The OpenHIM should be used to access HAPI-FHIR.

Testing the HAPI FHIR Component

For testing this component we will be making use of curl to send our request, but any client could be used to achieve the same result.

Execute the command below:

curl http://127.0.0.1:3447/fhir/Patient

Local Development

Elasticsearch is the datastore for the Elastic (ELK) Stack

Launching

Launching this package involves the following steps:

  • [Cluster mode] Creating certificates and configuring the nodes

  • Running Elasticsearch

  • Setting Elasticsearch passwords

  • Importing Elasticsearch index

Importing

To initialize the index mapping in Elasticsearch, a helper container is launched to import a config file into Elasticsearch. The config importer looks for a file named fhir-enrich-report.json in <path to project packages>/analytics-datastore-elastic-search/importer.

The file fhir-enrich-report.json will contain the mapping of the index fhir-enrich-reports.

If we don't specify a mapping, Elasticsearch will create a dynamic mapping for the incoming data. This dynamic mapping may cause issues when we start sending data, as the data doesn't necessarily conform 100% to the types we expect when querying it out of Elasticsearch again.

Therefore, the mapping should be initialized in Elasticsearch using the config importer.

The file fhir-enrich-report.json is just an example, the name and the mapping can be overridden.
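For illustration only, an Elasticsearch index mapping generally has the following shape (the field names here are hypothetical, and the exact body expected by the config importer may differ):

{
  "mappings": {
    "properties": {
      "patientId": { "type": "keyword" },
      "encounterDate": { "type": "date" }
    }
  }
}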

Running in Dev Mode

When running in DEV mode, Elasticsearch is reachable at:

http://127.0.0.1:9201/

Elasticsearch Backups

For detailed steps about creating backups see the Snapshot filesystem repository docs.

Elasticsearch offers the functionality to save a backup in different ways; for further understanding, see the Register a snapshot repository docs.

Elasticsearch Restore

To see how to restore snapshots in Elasticsearch, see the Snapshot Restore docs.

Local Development

The Interoperability Layer is the base of the Platform architecture.

Accessing the services

OpenHIM

  • Console: http://127.0.0.1:9000

  • Username: root@openhim.org

  • Password: instant101

Testing the Interoperability Component

As part of the Interoperability Layer setup we also do some initial config import for connecting the services together.

  • OpenHIM: Import a channel configuration that routes requests to the Data Store - HAPI FHIR service

This config importer will import channels and configuration according to the file openhim-import.json in the folder <path to project packages>/interoperability-layer-openhim/importer/volume.

Kafka Mapper Consumer

A Kafka consumer that maps FHIR resources to a flattened data structure.

Data Mapper Logstash

Generic Logstash pipeline for ELK stack.

Logstash provides a data transformation pipeline for analytics data. In the platform it is responsible for transforming FHIR messages into a flattened object that can be inserted into Elasticsearch.

Input

Logstash allows for different types of input to read the data: Kafka, HTTP ports, files, etc.

Filters

With a set of filters and plugins, the data can be transformed, filtered, and conditioned.

This allows the creation of a structured and flattened object out of many nested and long resources.

This makes accessing the different fields much easier and gets rid of the unused data.

Output

To save the data, Logstash provides a set of outputs such as: Elasticsearch, S3, files, etc.

Environment Variables

Listed in this page are all environment variables needed to run Kibana.

Variable Name | Type | Relevance | Required | Default
ES_KIBANA_SYSTEM | String | ElasticSearch auth username | Yes |
KIBANA_INSTANCES | Number | Number of service replicas | No | 1
KIBANA_YML_CONFIG | String | Path to the service configuration file | No | kibana-kibana.yml
KIBANA_USERNAME | String | Service username | No | elastic
KIBANA_PASSWORD | String | Service password | No | dev_password_only
KIBANA_SSL | Boolean | SSL protocol requirement | No | True
KIBANA_CONFIG_FILE | String | Path to the dashboard import file | No | kibana-export.ndjson
KIBANA_MEMORY_LIMIT | String | RAM usage limit | No | 3G
KIBANA_MEMORY_RESERVE | String | Reserved RAM | No | 500M

Environment Variables

Listed in this page are all environment variables needed to run the Monitoring package.

Variable Name | Type | Relevance | Required | Default
GF_SECURITY_ADMIN_USER | String | Username of Grafana service | No | admin
GF_SECURITY_ADMIN_PASSWORD | String | Password of Grafana service | No | dev_password_only

Local Development

A job scheduling tool.

Ofelia - Job Scheduler

Job Docs

The platform uses image: mcuadros/ofelia:v0.3.6 which has the following limitations:

  • Ofelia does not support config.ini files when run in docker mode (which enables scheduling jobs with docker labels), thus we always need to use the config.ini file for creating jobs.

  • Ofelia does not support attaching to a running instance of a service.

  • Ofelia does not support job-run (which allows you to launch a job with a specified image name) labels on non-ofelia services (i.e. you may not specify a job of type job-run within the nginx package, as Ofelia will not pick it up).

  • Ofelia only initializes jobs when it stands up and does not listen for new containers with new labels to update its schedules, thus Ofelia needs to be re-up'd every time a change is made to a job that is configured on another service's label.

Example of a job config

An example job config can be found in the file config.example.ini in the folder <path to project packages>/job-scheduler-ofelia/.


You can specify multiple jobs in a single file.

Environment Variables

Listed in this page are all environment variables needed to run Logstash.

Variable Name | Type | Relevance | Required | Default
LOGSTASH_INSTANCES | Number | Number of service replicas | No | 1
LOGSTASH_DEV_MOUNT | Boolean | DEV mount mode enabling flag | No | false
LOGSTASH_PACKAGE_PATH | String | Logstash package absolute path | Yes if LOGSTASH_DEV_MOUNT is true |
LS_JAVA_OPTS | String | JVM heap size, it should be no less than 4GB and no more than 8GB (maximum of 50-75% of total RAM) | No | -Xmx2g -Xms2g
ES_ELASTIC | String | ElasticSearch Logstash user password | Yes | dev_password_only
ES_HOSTS | String | Elasticsearch connection string | Yes | analytics-datastore-elastic-search:9200
KIBANA_SSL | Boolean | SSL protocol requirement | No | True
LOGSTASH_MEMORY_LIMIT | String | RAM usage limit | No | 3G
LOGSTASH_MEMORY_RESERVE | String | Reserved RAM | No | 500M

Local Development

Reverse proxy for secure and insecure nginx configurations.

Nginx Reverse Proxy

This package can be used to secure all of the data transferred to and from services using SSL encryption, and also to generate the SSL certificates.

Instead of configuring each package separately, we're using this package that will hold all of the Nginx configuration.

It will generate Staging or Production certificates from Let's Encrypt to ensure a secure connection (in case we require SSL to be enabled).

It is responsible for routing network traffic to the correct service.

Structure of Reverse Proxy Nginx package

The current package contains the following:

  • config: A folder that contains the general Nginx config for secure and insecure mode.

  • package-conf-insecure: A folder that contains all the insecure configs related to the services that need outside access.

  • package-conf-secure: A folder that contains all the secure configs related to the services that need outside access.

A job using Ofelia exists to renew the certificates automatically based on the certificate renewal period.

Adding new packages that require external access will require adding the Nginx config needed in this package.




Example of the certificate renewal job config:

[job-run "renew-certs"]
schedule = @every 1440h ;60 days
image = jembi/swarm-nginx-renewal:v1.0.0
volume = renew-certbot-conf:/instant
volume = /var/run/docker.sock:/var/run/docker.sock:ro
environment = RENEWAL_EMAIL=${RENEWAL_EMAIL}
environment = STAGING=${STAGING}
environment = DOMAIN_NAME=${DOMAIN_NAME}
environment = SUBDOMAINS=${SUBDOMAINS}
environment = REVERSE_PROXY_STACK_NAME=${REVERSE_PROXY_STACK_NAME}
delete = true

Monitoring

A package for monitoring the platform services

The monitoring package sets up services to monitor the entire deployed stack. This includes the state of the servers involved in the docker swarm, the docker containers themselves and particular applications such as Kafka. It also captures the logs from the various services.

This monitoring package uses:

  • Grafana: for dashboards

  • Prometheus: for recording metrics

  • Cadvisor: for reading docker container metrics

  • Loki: for storing logs

  • Node Exporter: for monitoring host machine metrics like CPU, memory etc

To use the monitoring services, include the monitoring package id in your list of package ids when standing up the platform.
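For example, assuming the package id is monitoring:

instant package init -n monitoring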

Adding application specific metrics

The monitoring service utilises service discovery to discover new metric endpoints to scrape.

To use custom metrics for an application, first configure that application to provide a Prometheus-compatible metrics endpoint. Then, let the monitoring service know about it by configuring specific docker service labels that tell the monitoring service to add a new endpoint to scrape, as shown by the prometheus-job-service and prometheus-address labels in the kafka-minion example at the end of this section.

prometheus-job-service lets Prometheus know to enable monitoring for this container and prometheus-address gives the endpoint that Prometheus can access the metrics on. By default this is assumed to be at the path /metrics by Prometheus.

By using the prometheus-job-service label, Prometheus will only create a single target for your application even if it is replicated via service config in docker swarm. If you would like to monitor each replica separately (i.e. if metrics are only captured for that replica and not shared to some central location in the application cluster) you can instead use the prometheus-job-task label and Prometheus will create a target for each replica.

A full list of supported labels is given below:

  • prometheus-job-service - indicates this service should be monitored

  • prometheus-job-task - indicates each task in the replicated service should be monitored separately

  • prometheus-address - the service address Prometheus can scrape metrics from, can only be used with prometheus-job-service

  • prometheus-scheme - the scheme to use when scraping a task or service (e.g. http or https), defaults to http

  • prometheus-metrics-path - the path to the metrics endpoint on the target (defaults to /metrics)

  • prometheus-port - the port of the metrics endpoint. Only usable with prometheus-job-task, defaults to all exposed ports for the container if no label is present

All services must also be on the prometheus_public network to be able to be seen by Prometheus for metrics scraping.

Adding additional dashboards

To add additional dashboards simply use docker configs to add new Grafana dashboard json files into this directory in the Grafana container: /etc/grafana/provisioning/dashboards/

That directory will be scanned periodically and new dashboards will automatically be added to Grafana.
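A minimal sketch of how a dashboard file could be attached with a docker config (the service and config names below are illustrative, not the platform's actual compose definitions):

version: '3.9'

services:
  dashboard-visualiser-grafana:
    image: grafana/grafana
    configs:
      - source: my-dashboard
        target: /etc/grafana/provisioning/dashboards/my-dashboard.json

configs:
  my-dashboard:
    file: ./my-dashboard.json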

Grafana dashboard json files may be exported directly from Grafana when saving dashboards, or you may look up the many existing dashboards in the Grafana marketplace.

For example, the kafka-minion exporter is configured with the following service labels so that Prometheus scrapes its metrics:

kafka-minion:
  image: quay.io/cloudhut/kminion:master
  hostname: kafka-minion
  environment:
    KAFKA_BROKERS: kafka:9092
  deploy:
    labels:
      - prometheus-job-service=kafka
      - prometheus-address=kafka-minion:8080

    Packages

The OpenHIM Platform includes a number of base packages which are useful for supporting Health Information Exchange workflows. Each section below describes the details of these packages.

Packages can be stood up individually using the instant package init -n <package_name> command, or they can be included in your own recipes. This can be accomplished by creating a profile that includes the necessary packages and any custom configuration packages.
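For example, to stand up a single package (using a package name as it appears elsewhere in these docs):

instant package init -n interoperability-layer-openhim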

    Local Development

    A Kafka consumer that maps FHIR resources to a flattened data structure

    Kafka-mapper-consumer

A Kafka processor that will consume messages from Kafka topics. These messages will be mapped according to the mapping defined in the file called fhir-mapping.json.

This flattened data will then be sent to Clickhouse DB to be stored.

    Each topic has its own table mapping, plugin and filter and one topic may be mapped in different ways.

    An example of fhir-mapping.json can be found in the package.

Each new message with a new ID will be inserted as a new row in the table defined in the mapping. An update of the message will result in an update in Clickhouse DB accordingly. Link to the GitHub repo: https://github.com/jembi/kafka-mapper-consumer.

    Environment Variables

    Listed in this page are all environment variables needed to run Clickhouse.

Variable Name | Type | Relevance | Required | Default
CLICKHOUSE_HOST | String | The service name (host) of Clickhouse | No | analytics-datastore-clickhouse
CLICKHOUSE_PORT | Number | The port that the service of Clickhouse is exposed to | No | 8123

    Environment Variables

    Listed in this page are all environment variables needed to run and initialize Elasticsearch.

Variable Name | Type | Relevance | Required | Default
ES_ELASTIC | String | Elasticsearch super-user password | Yes | dev_password_only
ES_KIBANA_SYSTEM | String | The password for the user Kibana used to connect and communicate with Elasticsearch | Yes | dev_password_only
ES_LOGSTASH_SYSTEM | String | The password for the user Logstash used to map and transform the data before storing it in Elasticsearch | Yes | dev_password_only
ES_BEATS_SYSTEM | String | The password for the user the Beats use when storing monitoring information in Elasticsearch | Yes | dev_password_only
ES_REMOTE_MONITORING_USER | String | The password for the user Metricbeat used when collecting and storing monitoring information in Elasticsearch. It has the remote_monitoring_agent and remote_monitoring_collector built-in roles | Yes | dev_password_only
ES_APM_SYSTEM | String | The password for the user of the APM server used when storing monitoring information in Elasticsearch | Yes | dev_password_only
ES_LEADER_NODE | String | Specify the leader service name (the service name in single mode, or the leader service name in cluster mode). This is used by the config importer to initialize the mapping inside Elasticsearch | Yes | analytics-datastore-elastic-search
ES_HEAP_SIZE | String | The heap size is the amount of RAM allocated to the Java Virtual Machine of an Elasticsearch node. -Xms and -Xmx should be set to the same value (50% of the total available RAM, to a maximum of 31GB) | No | -Xms2048m -Xmx2048m
ES_SSL | Boolean | This variable is used only for the config importer of Elasticsearch (internal connection between the elastic and importer docker services) | No | false
ES_MEMORY_LIMIT | String | RAM usage limit of Elasticsearch service | No | 3G
ES_MEMORY_RESERVE | String | Reserved RAM for Elasticsearch service | No | 500M
ES_PATH_REPO | String | The path to the repository in the container to store Elasticsearch backup snapshots | No | /backups/elasticsearch

    Analytics Datastore - Clickhouse

    Clickhouse is a SQL datastore.

Environment Variables

Listed in this page are all environment variables needed to run SanteMPI.

Variable Name | Type | Relevance | Required | Default
SANTEMPI_INSTANCES | Number | Number of service replicas | No | 1
SANTEMPI_MAIN_CONNECTION_STRING | String | Connection string to SanteMPI | No | Check below table
SANTEMPI_AUDIT_CONNECTION_STRING | String | Audit connection string to SanteMPI | No | Check below table
SANTEMPI_POSTGRESQL_PASSWORD | String | SanteMPI postgreSQL password | No | SanteDB123
SANTEMPI_POSTGRESQL_USERNAME | String | SanteMPI postgreSQL username | No | santempi
SANTEMPI_REPMGR_PRIMARY_HOST | String | SanteMPI postgreSQL replicas manager primary host | No | santempi-psql-1
SANTEMPI_REPMGR_PARTNER_NODES | String | SanteMPI postgreSQL replicas manager nodes hosts | Yes | santempi-psql-1,santempi-psql-2,santempi-psql-

Note

The environment variable SANTEMPI_REPMGR_PARTNER_NODES will differ between cluster and single mode.

Default value for SANTEMPI_MAIN_CONNECTION_STRING:

server=santempi-psql-1;port=5432; database=santedb; user id=santedb; password=SanteDB123; pooling=true; MinPoolSize=5; MaxPoolSize=15; Timeout=60;

Default value for SANTEMPI_AUDIT_CONNECTION_STRING:

server=santempi-psql-1;port=5432; database=auditdb; user id=santedb; password=SanteDB123; pooling=true; MinPoolSize=5; MaxPoolSize=15; Timeout=60;

Local Development

Accessing the Service

Jsreport - http://127.0.0.1:5488/

    Scripts/Templates Development

To make changes to the Jsreport scripts/templates without repeatedly starting and stopping the service, set the JS_REPORT_DEV_MOUNT environment variable in your .env file to true to mount the service's content files from your local machine.

    • You have to run the set-permissions.sh script before and after launching Jsreport when JS_REPORT_DEV_MOUNT=true.

    • REMEMBER TO EXPORT THE JSREXPORT FILE WHEN YOU'RE DONE EDITING THE SCRIPTS. More info is available at https://jsreport.net/learn/import-export.

    • With JS_REPORT_DEV_MOUNT=true, you have to set the JS_REPORT_PACKAGE_PATH variable with the absolute path to the Jsreport package on your local machine, i.e., JS_REPORT_PACKAGE_PATH=/home/user/Documents/Projects/platform/dashboard-visualiser-jsreport

    • Remember to shut down Jsreport before changing git branches if JS_REPORT_DEV_MOUNT=true, otherwise, the dev mount will persist the Jsreport scripts/templates across your branches.

    Export & Import

After editing the templates in Jsreport, you will need to save these changes. It is advised to export a file containing all the changes, named export.jsrexport, and put it into the folder <path to project packages>/dashboard-visualiser-jsreport/importer.

    The config importer of Jsreport will import the export.jsrexport and then all the templates, assets, and scripts will be loaded in Jsreport.


Local Development

Accessing the Service

Kibana - http://127.0.0.1:5601/

    FHIR Datastore HAPI FHIR

    A FHIR compliant server for the platform.

    The HAPI FHIR service will be used for two mandatory functionalities:

    • A validator of FHIR messages

    • A storage of FHIR messages

    A validator

Incoming messages from an EMR or Postman bundles are not always well structured, and they may be missing required elements or be malformed.

    HAPI FHIR will use a FHIR IG to validate these messages.

    It will reject any invalid resources and it will return errors according to the IG.

    HAPI FHIR is the first check to make sure the data injected in the rest of the system conforms to the requirements.

    A storage

    Backed by a PostgreSQL database, all the validated incoming messages will be stored.

This will allow HAPI FHIR to check for correct links and references between the resources, and it also acts as another store for backups in case the data is lost.

    Importing Saved Objects

    The config importer will import the file kibana-export.ndjson that exists in the folder <path to project packages>/dashboard-visualiser-kibana/importer.

    The saved objects that will be imported are the index patterns and dashboards. If you made any changes to these objects please don't forget to export them and save the file kibana-export.ndjson under the folder specified above.


    Dashboard Visualiser - Jsreport

    Jsreport is a visualisation tool configured to query data from Elasticsearch.


    Environment Variables

    Listed in this page are all environment variables needed to run the Message Bus Kafka.

Variable Name | Type | Relevance | Required | Default
KAFKA_INSTANCES | Number | Service replicas | No | 1
KAFKA_CPU_LIMIT | Number | CPU usage limit | No | 0
KAFKA_CPU_RESERVE | Number | Reserved CPU | No | 0.05
KAFKA_MEMORY_LIMIT | String | RAM usage limit | No | 3G
KAFKA_MEMORY_RESERVE | String | Reserved RAM | No | 500M
KAFKA_TOPICS | String | Kafka topics | Yes |
ZOOKEEPER_CPU_LIMIT | Number | CPU usage limit | No | 0
ZOOKEEPER_CPU_RESERVE | Number | Reserved CPU | No | 0.05
ZOOKEEPER_MEMORY_LIMIT | String | RAM usage limit | No | 3G
ZOOKEEPER_MEMORY_RESERVE | String | Reserved RAM | No | 500M
KMINION_CPU_LIMIT | Number | CPU usage limit | No | 0
KMINION_CPU_RESERVE | Number | Reserved CPU | No | 0.05
KMINION_MEMORY_LIMIT | String | RAM usage limit | No | 3G
KMINION_MEMORY_RESERVE | String | Reserved RAM | No | 500M
KAFDROP_CPU_LIMIT | Number | CPU usage limit | No | 0
KAFDROP_CPU_RESERVE | Number | Reserved CPU | No | 0.05
KAFDROP_MEMORY_LIMIT | String | RAM usage limit | No | 3G
KAFDROP_MEMORY_RESERVE | String | Reserved RAM | No | 500M

    Message Bus Helper Hapi Proxy

    A helper package for the Kafka message bus.

A helper for the Kafka message bus service. It sends data to the HAPI FHIR datastore and then to the Kafka message bus based on the response from HAPI FHIR.

    More particularly:

    1. It receives messages from OpenHIM

    2. It sends the data to the HAPI FHIR server and waits for the response

    3. It gets the response. According to the response status, it will send the message to the topic that corresponds to that status (2xx, 4xx, 5xx, ... )

    4. It will send back the response from HAPI FHIR to OpenHIM as well

    OpenFn

    Introduction

    Welcome to the documentation for the openfn package! This package is designed to provide a platform for seamless integration and automation of data workflows. Whether you are a developer, data analyst, or data scientist, this package will help you streamline your data processing tasks.

    Usage

Once you have added the openfn package, you can start using it in your projects. Here is how to instantiate the package:

    instant package init -n openfn --dev

    Demo

    To get a hands-on experience with the openfn package, try out the demo. The demo showcases the package's capabilities and provides a sample project used to export data from CDR to NDR with transformations. It utilizes a Kafka queue and a custom adapter to map Bundles to be compliant with the FHIR Implementation Guide (IG).

    Getting Started

    To access the demo, follow these steps:

1. Visit the OpenFn Demo website.

2. Use the following demo credentials: username: root@openhim.org, password: instant101

3. Configure the Kafka trigger: change the trigger type from webhook to “Kafka Consumer” and enter the configuration details. Kafka topic: {whichever you want to use} (e.g., “cdr-ndr”); Hosts: {cdr host name}; Initial offset reset policy: earliest; Connection timeout: 30 (default value, but can be adjusted). Warning: check Disable this trigger to ensure that consumption doesn’t start until you are ready to run the workflow! Once unchecked, it will immediately start consuming messages off the topic.

    Documentation

For more detailed information on the openfn package and its functionalities, please refer to the official documentation. The documentation covers various topics, including installation instructions, usage guidelines, and advanced features.

    Environment Variables

    Listed in this page are all environment variables needed to run Reverse Proxy Nginx.

Variable Name | Type | Relevance | Required | Default
DOMAIN_NAME | String | Domain name | Yes | localhost
SUBDOMAINS | String | Subdomain names | Yes |
RENEWAL_EMAIL | String | Renewal email | Yes |
REVERSE_PROXY_INSTANCES | Number | Number of instances | No | 1
STAGING | String | Generate fake or real certificate (true for fake) | No | false
NGINX_CPU_LIMIT | Number | CPU usage limit | No | 0
NGINX_CPU_RESERVE | Number | Reserved CPU | No | 0.05
NGINX_MEMORY_LIMIT | String | RAM usage limit | No | 3G
NGINX_MEMORY_RESERVE | String | Reserved RAM | No | 500M

    Environment Variables

    Listed in this page are all environment variables needed to run Hapi-proxy.

Variable Name | Type | Relevance | Required | Default
HAPI_SERVER_URL | String | Hapi-fhir server URL | No | http://hapi-fhir:8080/fhir
KAFKA_BOOTSTRAP_SERVERS | String | Kafka server | No | kafka:9092
HAPI_SERVER_VALIDATE_FORMAT | String | Path to the service configuration file | No | kibana-kibana.yml
HAPI_PROXY_INSTANCES | Number | Number of instances of hapi-proxy | No | 1
HAPI_PROXY_CPU_LIMIT | Number | CPU usage limit | No | 0
HAPI_PROXY_CPU_RESERVE | Number | Reserved CPU usage | No | 0.05
HAPI_PROXY_MEMORY_LIMIT | String | RAM usage limit | No | 3G
HAPI_PROXY_MEMORY_RESERVE | String | Reserved RAM | No | 500M




    Kafka Unbundler Consumer

    A kafka processor to unbundle resources into their own kafka topics.

The kafka unbundler will consume resources from the 2xx topics in Kafka, split them according to their resource type and send them back to Kafka under new topics.

    Each resource type has its own topic.

Link to the GitHub repo: https://github.com/jembi/kafka-unbundler-consumer.

    Reverse Proxy Nginx

    Reverse proxy for secure and insecure nginx configurations.

    Local Development

    Kafka Topics Configuration

Using a config importer, Kafka's topics are imported into Kafka. The topics are specified using the KAFKA_TOPICS environment variable, and must be of the syntax:

    topic or topic:partition:replicationFactor

    Using topics 2xx, 3xx, and metrics (partition=3, replicationFactor=1) as an example, we would declare:

    KAFKA_TOPICS=2xx,3xx,metrics:3:1

    where topics are separated by commas.

    Accessing Kafdrop

Kafdrop - http://127.0.0.1:9013/

    Environment Variables

Listed in this page are all environment variables needed to run the hapi-fhir package.

Variable Name | Type | Relevance | Required | Default
REPMGR_PRIMARY_HOST | String | Service name of the primary replication manager host (PostgreSQL) | No | postgres-1
REPMGR_PARTNER_NODES | String | Service names of the replicas of PostgreSQL | Yes | postgres-1
POSTGRES_REPLICA_SET | String | PostgreSQL replica set (host and port of the replicas) | Yes | postgres-1:5432
HAPI_FHIR_CPU_LIMIT | Number | CPU limit usage for hapi-fhir service | No | 0 (unlimited)
HAPI_FHIR_CPU_RESERVE | Number | Reserved CPU usage for hapi-fhir service | No | 0.05
HAPI_FHIR_MEMORY_LIMIT | String | RAM limit usage for hapi-fhir service | No | 3G
HAPI_FHIR_MEMORY_RESERVE | String | Reserved RAM usage for hapi-fhir service | No | 500M
HF_POSTGRES_CPU_LIMIT | Number | CPU limit usage for postgreSQL service | No | 0 (unlimited)
HF_POSTGRES_CPU_RESERVE | Number | Reserved CPU usage for postgreSQL service | No | 0.05
HF_POSTGRES_MEMORY_LIMIT | String | RAM limit usage for postgreSQL service | No | 3G
HF_POSTGRES_MEMORY_RESERVE | String | Reserved RAM usage for postgreSQL service | No | 500M
HAPI_FHIR_INSTANCES | Number | Number of hapi-fhir service replicas | No | 1
HF_POSTGRESQL_USERNAME | String | Hapi-fhir PostgreSQL username | Yes | admin
HF_POSTGRESQL_PASSWORD | String | Hapi-fhir PostgreSQL password | Yes | instant101
HF_POSTGRESQL_DATABASE | String | Hapi-fhir PostgreSQL database | No | hapi
REPMGR_PASSWORD | String | hapi-fhir PostgreSQL Replication Manager password | Yes |

    Environment Variables

    A kafka processor to unbundle resources into their own kafka topics.

Variable Name | Type | Relevance | Required | Default
KAFKA_HOST | String | Kafka hostname | No | kafka
KAFKA_PORT | Number | Kafka port | No | 9092

    Environment Variables

    The following environment variables can be used to configure Traefik:

Variable | Value | Description
TLS | true | Enable or disable TLS encryption.
TLS_CHALLENGE | http | The challenge type to use for TLS certificate generation.
WEB_ENTRY_POINT | web | The entry point for web traffic.
REDIRECT_TO_HTTPS | true | Enable or disable automatic redirection to HTTPS.
CERT_RESOLVER | le | The certificate resolver to use for obtaining TLS certificates.
CA_SERVER | https://acme-v02.api.letsencrypt.org/directory | The URL of the ACME server for certificate generation.



    Local Development

    Clickhouse is a SQL datastore

    Launching

    Launching this package executes the following two steps:

    • Running Clickhouse service

    • Running config importer to run the initial SQL script

    Initializing ClickHouse

    The config importer will be launched to run a NodeJS script after ClickHouse has started.

    It will run SQL queries to initialize the tables and the schema, and can also include initial seed data if required.

    The config importer looks for two files clickhouseTables.js and clickhouseConfig.js found in <path to project packages>/analytics-datastore-clickhouse/importer/config.

For specific implementations, this folder can be overridden.
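For illustration, the kind of initialization SQL such a script might run (the table and columns are hypothetical):

CREATE TABLE IF NOT EXISTS patient_example (
  id String,
  created_at DateTime
) ENGINE = MergeTree()
ORDER BY id;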

    Job Scheduler Ofelia

    A job scheduling tool.

    Analytics Datastore - Elasticsearch

    Elasticsearch is the datastore for the Elastic (ELK) Stack.

    Running in Clustered Mode

    Pre-Deploy Configuration

    If running in clustered mode, take note that each machine has to have the following vm.max_map_count setting:

    sysctl -w vm.max_map_count=262144

    Dashboard Visualiser - Kibana

    Kibana is a visualisation tool forming part of the Elastic (ELK) Stack for creating dashboards by querying data from ElasticSearch.

    Message Bus - Kafka

    Kafka is a stream processing platform which groups like-messages together, such that the number of sequential writes to disk can be increased, thus effectively increasing database speeds.

    Components

    The message-bus-kafka package consists of a few components, those being Kafka, Kafdrop, and Kminion.


The services consuming from and producing to Kafka might crash if Kafka is unreachable, so this is something to bear in mind when making changes to or restarting the Kafka service.

    Kafka

    The core stream-processing element of the message-bus-kafka package.

    Kafdrop

    Kafdrop is a web user-interface for viewing Kafka topics and browsing consumer-groups.

    Kminion

    A prometheus exporter for Kafka.


    Interoperability Layer Openhim

    The interoperability layer that enables simpler data exchange between the different systems. It is also the security layer for the other systems.

    This component consists of two services:

    • Interoperability Layer - OpenHIM for routing the events

    • Mongo DB for storing the transactions

It provides an interface for:

1. Checking the transaction logs

2. Configuring the channels to route the events

3. User authentication logs

4. Service logs

5. Rerunning the transaction tasks

6. Reprocessing mediator launching

OpenHIM is based on three main services: openhim-core as a backend, openhim-console as a frontend and mongo as a database.

    It is a mandatory component in the stack and the entry point for all incoming requests from the external systems.

    Client Registry - SanteMPI

A patient matching and deduplication service for the platform.

    This package consists of four services:

    • Postgres Main DB

    • Postgres Audit DB

    • SanteMPI Web UI

    • SanteMPI API

    Environment Variables

Listed in this page are all environment variables needed to run the interoperability layer OpenHIM.

Variable Name | Type | Relevance | Required | Default
OPENHIM_CORE_MEDIATOR_HOSTNAME | String | Hostname of the Openhim mediator | Yes | localhost
OPENHIM_MEDIATOR_API_PORT | Number | Port of the Openhim mediator | Yes | 8080
OPENHIM_CORE_INSTANCES | Number | Number of openhim-core instances | No | 1
OPENHIM_CONSOLE_INSTANCES | String | Number of openhim-console instances | No | 1
OPENHIM_MONGO_URL | String | MongoDB connection string | Yes | mongodb://mongo-1:27017/openhim
OPENHIM_MONGO_ATNAURL | String | ??????????? | Yes | mongodb://mongo-1:27017/openhim
OPENHIM_CPU_LIMIT | Number | CPU limit usage for openhim-core | No | 0
OPENHIM_CPU_RESERVE | Number | Reserved CPU usage for openhim-core | No | 0.05
OPENHIM_MEMORY_LIMIT | String | RAM usage limit for openhim-core | No | 3G
OPENHIM_MEMORY_RESERVE | String | Reserved RAM for openhim-core | No | 500M
OPENHIM_CONSOLE_CPU_LIMIT | Number | CPU limit usage for openhim-console | No | 0
OPENHIM_CONSOLE_CPU_RESERVE | Number | Reserved CPU usage for openhim-console | No | 0.05
OPENHIM_CONSOLE_MEMORY_LIMIT | String | RAM usage limit for openhim-console | No | 2G
OPENHIM_CONSOLE_MEMORY_RESERVE | String | Reserved RAM for openhim-console | No | 500M
OPENHIM_MONGO_CPU_LIMIT | Number | CPU limit usage for mongo | No | 0
OPENHIM_MONGO_CPU_RESERVE | Number | Reserved CPU usage for mongo | No | 0.05
OPENHIM_MONGO_MEMORY_LIMIT | String | RAM usage limit for mongo | No | 3G
OPENHIM_MONGO_MEMORY_RESERVE | String | Reserved RAM for mongo | No | 500M
MONGO_SET_COUNT | Number | Number of instances of Mongo | Yes | 1

    Dashboard Visualiser - Superset

    Superset is a visualisation tool meant for querying data from a SQL-type database.

    Version upgrade process (with rollback capability)

By default, if you simply update the image that the superset service uses to a later version, the container will automatically run a database migration when it is scheduled and the version of superset will be upgraded. The problem, however, is that if there is an issue with this newer version you cannot roll back the upgrade, since the database migration that ran will cause the older version to throw an error and the container will no longer start. As such, it is recommended to first create a postgres dump of the superset postgres database before attempting to upgrade superset's version.

1. Exec into the postgres container as the root user (otherwise you will get write permission issues)

2. Run the pg_dump command on the superset database. The database name is stored in SUPERSET_POSTGRESQL_DATABASE and defaults to superset

3. Copy that dumped sql script outside the container

4. Update the superset version (either through a platform deploy or with a docker command on the server directly -- docker service update superset_dashboard-visualiser-superset --image apache/superset:tag)

    Rolling back upgrade

In the event that something goes wrong you'll need to roll back the database changes too, i.e. run the superset_backup.sql script we created before upgrading the superset version:

1. Copy the superset_backup.sql script into the container

2. Exec into the postgres container

3. Run the sql script (where -d superset is the database name stored in SUPERSET_POSTGRESQL_DATABASE)

    Local Development

    Accessing the Service

Superset - http://127.0.0.1:8089/

    Using the superset_config.py file

The Superset package is configured to contain a superset_config.py file, which Superset looks for, and subsequently activates the contained feature flags. For more information on the allowed feature flags, visit https://github.com/apache/superset/blob/master/RESOURCES/FEATURE_FLAGS.md.

    Importing & Exporting Assets

    The config importer written in JS will import the file superset-export.zip that exists in the folder <path to project packages>/dashboard-visualiser-superset/importer/config. The assets that will be imported to Superset are the following:

    • The link to the Clickhouse database

    • The dataset saved from Clickhouse DB

    • The dashboards

    • The charts

If you made any changes to these objects, please don't forget to export them and save the file as superset-export.zip under the folder specified above. NB! It is not possible to export all these objects from the Superset UI; you can check the Postman collection: CARES DISI CDR -> Superset export assets, where you will find two requests. To do the export, three steps are required:

    1. Run the Get Token Superset request to get the token (please make sure that you are using the correct request URL). An example of a response from Superset that will be displayed: { "access_token": "eyJ0eXAiOiJKV1...." }

2. Copy the access token and put it into the second request Export superset assets in the Authorization section.

3. Run the second request Export superset assets. You can save the response into a file called superset-export.zip under the folder specified above.

Your changes should then be saved.
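The same two requests can also be made with curl. The endpoint paths below follow the standard Superset REST API and may differ between Superset versions; the URL and credentials are examples only:

# 1. Get an access token
curl -X POST http://127.0.0.1:8089/api/v1/security/login \
  -H 'Content-Type: application/json' \
  -d '{"username": "admin", "password": "admin", "provider": "db", "refresh": true}'

# 2. Export the assets using the returned access_token
curl http://127.0.0.1:8089/api/v1/assets/export/ \
  -H 'Authorization: Bearer <access_token>' \
  --output superset-export.zip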

    Environment Variables

    Variable Name
    Description
    Default

    OPENFN_DATABASE_URL

    The URL of the PostgreSQL database

    postgresql://openfn:instant101@postgres-1:5432/lightning_dev

    OPENFN_DISABLE_DB_SSL

    Whether to disable SSL for the database connection

    true

    OPENFN_IS_RESETTABLE_DEMO

    Whether the application is running in resettable demo mode

    true

    Reverse Proxy Traefik

    Reverse proxy for secure traefik configurations.

    Reverse Proxy Traefik


The package is an alternative to the Nginx reverse proxy. This reverse proxy exposes packages using both subdomains and subdirectories to host the following services:

Package | Hosted
OpenHim | Sub Domain (Frontend) Sub Directory (Backend) (e.g. openhim. and openhim./openhimcore)
Superset | Sub Domain (e.g. superset.)
Jempi | Sub Domain (e.g. jempi.)
Santempi | Sub Domain (e.g. santempi.)
Kibana | Sub Domain (e.g. kibana.)
Minio | Sub Directory (e.g. /minio)
Grafana | Sub Directory (e.g. /grafana)
JSReport | Sub Directory (e.g. /jsreport)

Please ensure that the ENV "DOMAIN_NAME_HOST_TRAEFIK" is set. In this documentation we will be using the placeholder "domain" for its value.

    Subdomain-Based Reverse Proxy

The following packages do not support subdirectories and require the use of a subdomain to access them over the reverse proxy.

    Superset

    Set the following environment variable in the package-metadata.json in the "./dashboard-visualiser-superset" directory

    Jempi

    Set the following environment variables in the package-metadata.json in the "./client-registry-jempi" directory

    Santempi

    Set the following environment variables in the package-metadata.json in the "./client-registry-santempi" directory

    Enabling Kibana

    Set the following environment variables in the package-metadata.json in the "./dashboard-visualiser-kibana" directory

    Subdirectory

    Enabling Minio

    Set the following environment variables in the package-metadata.json in the "monitoring" directory

    MinIO Configuration

    The MinIO server is configured to run with the following port settings:

    • API Port: 9090

    • Console Port: 9001

    Ensure that your Traefik configuration reflects these ports to properly route traffic to the MinIO services. The API can be accessed at https://<domain>/minio and the Console at https://<domain>/minio-console.

    Update your Traefik labels in the docker-compose.yml to match these settings:

    Enabling Grafana

    Set the following environment variables in the package-metadata.json in the "monitoring" directory

    JS Report

    Set the following environment variables in the package-metadata.json in the "dashboard-visualiser-jsreport" directory

    OpenHIM

    Set the following environment variables in the package-metadata.json in the "./interoperability-layer-openhim" directory

    Note: Only the Backend services are accessible through subdirectory paths, not the frontend


OpenFn environment variables (continued):

OPENFN_LISTEN_ADDRESS

    The IP address to listen on

    0.0.0.0

    OPENFN_LOG_LEVEL

    The log level for the application

    debug

    OPENFN_ORIGINS

    The allowed origins for CORS

    http://localhost:4000

    OPENFN_PRIMARY_ENCRYPTION_KEY

    The primary encryption key

    KLu/IoZuaf+baDECd8wG4Z6auwNe6VAmwh9N8lWdJ1A=

    OPENFN_SECRET_KEY_BASE

    The secret key base

    jGDxZj2O+Qzegm5wcZ940RfWO4D6RyU8thNCr5BUpHNwa7UNV52M1/Sn+7RxiP+f

    OPENFN_WORKER_RUNS_PRIVATE_KEY

    The private key for worker runs

    LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRREVtR3drUW5pT0hqVCsKMnkyRHFvRUhyT3dLZFI2RW9RWG9DeDE4MytXZ3hNcGthTFZyOFViYVVVQWNISGgzUFp2Z2UwcEIzTWlCWWR5Kwp1ajM1am5uK2JIdk9OZGRldWxOUUdpczdrVFFHRU1nTSs0Njhldm5RS0h6R29DRUhabDlZV0s0MUd5SEZCZXppCnJiOGx2T1A1NEtSTS90aE5pVGtHaUIvTGFLMldLcTh0VmtoSHBvaFE3OGIyR21vNzNmcWtuSGZNWnc0ZE43d1MKdldOamZIN3QwSmhUdW9mTXludUxSWmdFYUhmTDlnbytzZ0thc0ZUTmVvdEZIQkYxQTJjUDJCakwzaUxad0hmdQozTzEwZzg0aGZlTzJqTWlsZlladHNDdmxDTE1EZWNrdFJGWFl6V0dWc25FcFNiOStjcWJWUXRvdEU4QklON09GClRmaEx2MG9uQWdNQkFBRUNnZ0VBV3dmZyt5RTBSVXBEYThiOVdqdzNKdUN4STE1NzFSbmliRUhKVTZzdzNyS0EKck9HM0w5WTI0cHhBdlVPSm5GMFFzbThrUVQ4RU1MU3B6RDdjdDVON2RZMngvaGY4TThhL0VSWXM4cFlYcXI5Vwpnbnh3NldGZ0R6elFHZ0RIaW0raXNudk5ucFdEbTRGVTRObG02d2g5MzVSZlA2KzVaSjJucEJpZjhFWDJLdE9rCklOSHRVbFcwNFlXeDEwS0pIWWhYNFlydXVjL3MraXBORzBCSDZEdlJaQzQxSWw0N1luaTg1OERaL0FaeVNZN1kKWTlTamNKQ0QvUHBENTlNQjlSanJDQjhweDBjWGlsVXBVZUJSYndGalVwbWZuVmhIa1hiYlM1U0hXWWM4K3pLRQp2ajFqSEpxc2UyR0hxK2lHL1V3NTZvcHNyM2x3dHBRUXpVcEJGblhMMFFLQmdRRDM5bkV3L1NNVGhCallSd1JGCkY2a2xOYmltU2RGOVozQlZleXhrT0dUeU5NSCtYckhsQjFpOXBRRHdtMit3V2RvcWg1ZFRFbEU5K1crZ0FhN0YKbXlWc2xPTW4wdnZ2cXY2Wkp5SDRtNTVKU0lWSzBzRjRQOTRMYkpNSStHUW5VNnRha3Y0V0FSMkpXaURabGxPdAp3R01EQWZqRVIrSEFZeUJDKzNDL25MNHF5d0tCZ1FESzk3NERtV0c4VDMzNHBiUFVEYnpDbG9oTlQ2UldxMXVwCmJSWng4ZGpzZU0vQ09kZnBUcmJuMnk5dVc3Q1pBNFVPQ2s4REcxZ3ZENVVDYlpEUVdMaUp5RzZGdG5OdGgvaU8KT1dJM0UyczZOS0VMMU1NVzh5QWZwNzV4Ung5cnNaQzI2UEtqQ0pWL2lTVjcyNlQ1ZTFzRG5sZUtBb0JFZnlDRgpvbEhhMmhybWxRS0JnUURHT1YyOWd1K1NmMng1SVRTWm8xT1ZxbitGZDhlZno1d3V5YnZ3Rm1Fa2V1YUdXZDh1CnJ4UFM3MkJ6K0Y1dUJUWngvMWtLa0w4Zm94TUlQN0FleW1zOWhUeWVybnkyMk9TVlBJSmN3dExqMUxTeDN3L0kKK0kyaVpsYVl1akVlZXpXbHY1S2R0cUNORjk3Zzh0ck1NTnMySVZKa1h1NXFwUk82V0ZXRzZGL2h4d0tCZ0hnNApHYUpFSFhIT204ekZTU2lYSW5FWGZKQmVWZmJIOUxqNzFrbVRlR3RJZTdhTlVHZnVxY1BYUGRiZUZGSHRsY2ZsCkx6dWwzS3V6VFExdEhGTnIyWkl5MTlQM1o1TSs4R2c5Y1FFeVRWYmlpV2xha2x0cmttRnRtQTI4bE0zVEZPWmkKUUNWMUZpZStjaWRVeC9qRnFma1F0c1VXQ2llSUxSazZOY1d0WGpXcEFvR0JBTGN6Y210VGlUUEFvWnk0MFV1QQpTOXpUd3RsamhmUWJEVTVjb21EcnlKcnFRU0VOdmQ2VW5HdW0zYVNnNk13dDc0NGxidDAyMC9mSGI0WTJkTGhMCmx4YWJ5b1dQUElRRUpLL1NNOGtURFEvYTRyME5tZzhuV3h5bGFLcHQ5WUhmZ2NYMkYzSzUrc0VSUGNFcVZlWFMKdWZkYXdYQVlFampZK3V2UHZ2YzU3RU1aCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K

    OPENFN_WORKER_SECRET

    The secret key for the worker

    secret_here

    POSTGRES_USER

    The username for the PostgreSQL database

    postgres

    POSTGRES_SERVICE

    The service name for the PostgreSQL database

    postgres-1

    POSTGRES_DATABASE

    The name of the PostgreSQL database

    postgres

    POSTGRES_PASSWORD

    The password for the PostgreSQL database

    instant101

    POSTGRES_PORT

    The port number for the PostgreSQL database

    5432

    OPENFN_POSTGRESQL_DB

    The name of the OpenFn PostgreSQL database

    lightning_dev

    OPENFN_POSTGRESQL_USERNAME

    The username for the OpenFn PostgreSQL database

    openfn

    OPENFN_POSTGRESQL_PASSWORD

    The password for the OpenFn PostgreSQL database

    instant101

    OPENFN_WORKER_LIGHTNING_PUBLIC_KEY

    The public key for the worker lightning

    LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF4SmhzSkVKNGpoNDAvdHN0ZzZxQgpCNnpzQ25VZWhLRUY2QXNkZk4vbG9NVEtaR2kxYS9GRzJsRkFIQng0ZHoyYjRIdEtRZHpJZ1dIY3ZybzkrWTU1Ci9teDd6alhYWHJwVFVCb3JPNUUwQmhESURQdU92SHI1MENoOHhxQWhCMlpmV0ZpdU5Sc2h4UVhzNHEyL0piemoKK2VDa1RQN1lUWWs1Qm9nZnkyaXRsaXF2TFZaSVI2YUlVTy9HOWhwcU85MzZwSngzekdjT0hUZThFcjFqWTN4Kwo3ZENZVTdxSHpNcDdpMFdZQkdoM3kvWUtQcklDbXJCVXpYcUxSUndSZFFObkQ5Z1l5OTRpMmNCMzd0enRkSVBPCklYM2p0b3pJcFgyR2JiQXI1UWl6QTNuSkxVUlYyTTFobGJKeEtVbS9mbkttMVVMYUxSUEFTRGV6aFUzNFM3OUsKSndJREFRQUIKLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg

    OPENFN_IMAGE

    The image name for OpenFn

    openfn/lightning:v2.9.5

    OPENFN_WORKER_IMAGE

    The image name for OpenFn worker

    openfn/ws-worker:latest

    OPENFN_KAFKA_TRIGGERS_ENABLED

    Whether Kafka triggers are enabled

    true

    OPENFN_API_KEY

    The API key for OpenFn

    apiKey

    OPENFN_ENDPOINT

    The endpoint for OpenFn

    http://localhost:4000

    OPENFN_DOCKER_WEB_CPUS

    The number of CPUs allocated to the web container

    2

    OPENFN_DOCKER_WEB_MEMORY

    The amount of memory allocated to the web container

    4G

    OPENFN_DOCKER_WORKER_CPUS

    The number of CPUs allocated to the worker container

    2

    OPENFN_DOCKER_WORKER_MEMORY

    The amount of memory allocated to the worker container

    4G

    FHIR_SERVER_BASE_URL

    The base URL for the FHIR server

    http://openhim-core:5001

    FHIR_SERVER_USERNAME

    The username for the FHIR server

    openfn_client

    FHIR_SERVER_PASSWORD

    The password for the FHIR server

    openfn_client_password



    docker exec -u root -it superset_postgres-metastore-1.container-id-here bash
    pg_dump superset -c -U admin > superset_backup.sql
    docker cp superset_postgres-metastore-1.container-id-here:/superset_backup.sql /path/to/save/to/superset_backup.sql
    docker cp /path/to/save/to/superset_backup.sql superset_postgres-metastore-1.container-id-here:/superset_backup.sql 
    docker exec -it superset_postgres-metastore-1.container-id-here bash
    cat superset_backup.sql | psql -U admin -d superset
    "environmentVariables":
    {
    # Other Configurations
    ...
        "SUPERSET_TRAEFIK_SUBDOMAIN": "superset"
    }
    "environmentVariables":
    {
    # Other Configurations
    ...
        "REACT_APP_JEMPI_BASE_API_HOST": "jempi-api.domain",
        "REACT_APP_JEMPI_BASE_API_PORT": "443",
        "JEMPI_API_TRAEFIK_SUBDOMAIN": "jempi-api",
        "JEMPI_WEB_TRAEFIK_HOST_NAME": "jempi-web",
    }
    "environmentVariables":
    {
    # Other Configurations
    ...
        "SANTEDB_WWW_TRAEFIK_SUBDOMAIN": "santewww",
        "SANTEDB_MPI_TRAEFIK_SUBDOMAIN": "santempi"
    }
    
    "environmentVariables":
    {
    # Other Configurations
    ...
        "KIBANA_TRAEFIK_SUBDOMAIN": "kibana"
    }
    
    "environmentVariables":
    {
    # Other Configurations
    ...
        "MINIO_BROWSER_REDIRECT_URL": "https://domain/minio-console/"
    }
    # API Configuration
    - traefik.http.services.minio.loadbalancer.server.port=9090
    # Console Configuration
    - traefik.http.services.minio-console.loadbalancer.server.port=9001
    
    "environmentVariables":
    {
    # Other Configurations
    ...
        "KC_GRAFANA_ROOT_URL": "%(protocol)s://%(domain)s/grafana/",
        "GF_SERVER_DOMAIN": "domain",
        "GF_SERVER_SERVE_FROM_SUB_PATH": "true",
    }
    
    "environmentVariables":
    {
    # Other Configurations
    ...
        "JS_REPORT_PATH_PREFIX": "/jsreport"
    }
    "environmentVariables":
    {
    # Other Configurations
    ...
        "OPENHIM_SUBDOMAIN": "domain",
        "OPENHIM_CONSOLE_BASE_URL": "http://domain"
        "OPENHIM_CORE_MEDIATOR_HOSTNAME": "domain/openhimcomms",
        "OPENHIM_MEDIATOR_API_PORT": "443"
    }