OpenHIM Platform

Local Development

A job scheduling tool.

Ofelia - Job Scheduler

Job Docs

The platform uses image: mcuadros/ofelia:v0.3.6 which has the following limitations:

  • Ofelia does not support config.ini files when run in docker mode (which enables scheduling jobs with docker labels) thus we need to always use the config.ini file for creating jobs.

  • Ofelia does not support attaching to a running instance of a service.

  • Ofelia does not support job-run labels (which allow you to launch a job with a specified image name) on non-ofelia services (i.e. you may not specify a job of type job-run within the nginx package, as Ofelia will not pick it up)

  • Ofelia only initializes jobs when it starts up and does not listen for new containers with new labels to update its schedules, thus Ofelia needs to be re-up'd every time a change is made to a job that is configured via another service's labels.

Example of a job config

An example job config can be found in the file config.example.ini in the folder <path to project packages>/job-scheduler-ofelia/.

[job-run "renew-certs"]
schedule = @every 1440h ;60 days
image = jembi/swarm-nginx-renewal:v1.0.0
volume = renew-certbot-conf:/instant
volume = /var/run/docker.sock:/var/run/docker.sock:ro
environment = RENEWAL_EMAIL=${RENEWAL_EMAIL}
environment = STAGING=${STAGING}
environment = DOMAIN_NAME=${DOMAIN_NAME}
environment = SUBDOMAINS=${SUBDOMAINS}
environment = REVERSE_PROXY_STACK_NAME=${REVERSE_PROXY_STACK_NAME}
delete = true

You can specify multiple jobs in a single file.
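For illustration only, here is a sketch of a config.ini defining two jobs in a single file; the job names, schedules, images and commands below are hypothetical placeholders rather than actual platform jobs:

[job-run "cleanup-temp-files"]
schedule = @daily
image = alpine:3.18
volume = shared-exports:/instant
command = sh -c 'rm -rf /instant/tmp/*'
delete = true

[job-run "print-heartbeat"]
schedule = @every 5m
image = alpine:3.18
command = echo "ofelia heartbeat"
delete = true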

Environment Variables

Listed in this page are all environment variables needed to run the hapi-fhir package.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| REPMGR_PRIMARY_HOST | String | Service name of the primary replication manager host (PostgreSQL) | No | postgres-1 |
| REPMGR_PARTNER_NODES | String | Service names of the replicas of PostgreSQL | Yes | postgres-1 |
| POSTGRES_REPLICA_SET | String | PostgreSQL replica set (host and port of the replicas) | Yes | postgres-1:5432 |
| HAPI_FHIR_CPU_LIMIT | Number | CPU limit usage for hapi-fhir service | No | 0 (unlimited) |
| HAPI_FHIR_CPU_RESERVE | Number | Reserved CPU usage for hapi-fhir service | No | 0.05 |
| HAPI_FHIR_MEMORY_LIMIT | String | RAM limit usage for hapi-fhir service | No | 3G |
| HAPI_FHIR_MEMORY_RESERVE | String | Reserved RAM usage for hapi-fhir service | No | 500M |
| HF_POSTGRES_CPU_LIMIT | Number | CPU limit usage for PostgreSQL service | No | 0 (unlimited) |
| HF_POSTGRES_CPU_RESERVE | Number | Reserved CPU usage for PostgreSQL service | No | 0.05 |
| HF_POSTGRES_MEMORY_LIMIT | String | RAM limit usage for PostgreSQL service | No | 3G |
| HF_POSTGRES_MEMORY_RESERVE | String | Reserved RAM usage for PostgreSQL service | No | 500M |
| HAPI_FHIR_INSTANCES | Number | Number of hapi-fhir service replicas | No | 1 |
| HF_POSTGRESQL_USERNAME | String | Hapi-fhir PostgreSQL username | Yes | admin |
| HF_POSTGRESQL_PASSWORD | String | Hapi-fhir PostgreSQL password | Yes | instant101 |
| HF_POSTGRESQL_DATABASE | String | Hapi-fhir PostgreSQL database | No | hapi |
| REPMGR_PASSWORD | String | Hapi-fhir PostgreSQL Replication Manager password | Yes | |

Central Data Repository with Data Warehousing

Note: This recipe is in a pre-release alpha stage. It is usable, but use it at your own risk.

This recipe sets up an HIE that does the following:

  • Accepts FHIR bundles submitted securely through an IOL (OpenHIM)

  • Stores Clinical FHIR data to a FHIR store (HAPI FHIR)

  • Stores Patient Demographic data to an MPI (JeMPI)

  • Pushes FHIR resources to Kafka for the reporting pipeline (and other systems) to use

  • Pulls FHIR data out of Kafka and maps it to flattened tables in the Data Warehouse (Clickhouse)

  • Allows for the Data Warehouse data to be visualised via a BI tool (Apache Superset)

To launch this package in dev mode copy and paste this into your terminal in a new folder (ensure you have the instant CLI installed):

wget https://github.com/jembi/platform/releases/latest/download/cdr-dw.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p cdr-dw --dev

Services

When deployed in --dev mode the location of the UIs will be as follows:

| Service | URL | Auth |
| --- | --- | --- |
| OpenHIM | http://localhost:9000/ | Test SSO user: u: test p: dev_password_only |
| JeMPI | http://localhost:3033/ | Test SSO user: u: test p: dev_password_only |
| Superset | http://localhost:8089/ | Test SSO user: u: test p: dev_password_only |
| Grafana | http://localhost:3000/ | Test SSO user: u: test p: dev_password_only |
| Keycloak | http://localhost:9088/admin/master/console/#/platform-realm | u: admin p: dev_password_only |

Extra UIs only exposed in --dev mode:

| Service | URL | Auth |
| --- | --- | --- |
| Kafdrop | http://localhost:9013/ | none |
| HAPI FHIR | http://localhost:3447/ | none |

Example use

Use the following example postman collection to see the interactions you can have with the system and how the system reacts.

Getting Started

What you need to start using OpenHIM Platform.

Prerequisites

Before getting started with OpenHIM Platform you will need to have the Instant OpenHIE tool installed and functional.

  • If you're a Windows user and are using WSL2 to be able to run the platform: you should limit the amount of RAM/CPU that will be used by WSL, for more details please check the following link: Limiting memory usage in WSL2.

Quick Start

Ensure Docker Swarm is initialised:

docker swarm init

Download the latest OpenHIM Platform config file which configures Instant OpenHIE v2 to use OpenHIM Platform packages:

wget -qO config.yaml https://github.com/jembi/platform/releases/latest/download/config.yaml

Download the latest environment variable file, which sets configuration options for OpenHIM Platform packages:

wget -qO .env.local https://github.com/jembi/platform/releases/latest/download/.env.local

Launch some OpenHIM Platform packages, e.g.

instant package init --name interoperability-layer-openhim --name message-bus-kafka --env-file .env.local --dev

This launches the OpenHIM and Kafka packages in dev mode (which exposes service ports for development purposes) using the config supplied in the env var file.
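To check what was stood up, the standard Docker Swarm commands can be used (a quick sketch; the exact stack and service names depend on the packages you launched):

# List the stacks created by the launched packages
docker stack ls

# List running services and their replica counts
docker service ls

# Tail the logs of a specific service (replace the placeholder with a real service name)
docker service logs -f <service_name>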

To destroy the setup packages and delete their data run:

instant package destroy --name interoperability-layer-openhim --name message-bus-kafka --env-file .env.local --dev

Next, you might want to browse the recipes available in OpenHIM Platform. Each recipe bundles a set of packages and configuration to setup an HIE for a particular purpose.

For example, this command allows the most comprehensive recipe to be deployed with one command:

wget https://github.com/jembi/platform/releases/latest/download/cdr-dw.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p cdr-dw --dev

Alternatively you can also browse the individual set of packages that OpenHIM Platform offers. Each package's documentation lists the environment variables used to configure them.

For more information on how to start, stop, and destroy packages using the command line, see the Instant OpenHIE 2 CLI docs.

Please join us on Discord for support or to chat about new features or ideas.

Recipes

Pre-defined recipes for common use cases

OpenHIM platform comes bundled with a set of generic packages that can be deployed and configured to support a number of different use cases. To help users of OpenHIM Platform get started with something they can make use of immediately, a number of default OpenHIM Platform recipes are provided. These help you get started with everything you need set up and configured for a particular use case.

These recipes combine and configure multiple packages together so that a functional HIE is stood up that is pre-configured to support a particular use case.

We currently support the following default recipes:

Central Data Repository with Data Warehouse

A FHIR-based Shared Health record linked to an MPI for linking and matching patient demographics and a default reporting pipeline to transform and visualise FHIR data.

Central Data Repository

A FHIR-based Shared Health record linked to an MPI for linking and matching patient demographics. No reporting is included, but all FHIR data is pushed to Kafka for external systems to use.

Master Patient Index

A master patient index setup using JeMPI. It also includes OpenHIM as the API gateway providing security, a mapping mediator to allow FHIR-based communication with JeMPI, and Keycloak to support user management.

Master Patient Index

Note: This recipe is in a pre-release alpha stage. It is usable, but use it at your own risk.

This recipe sets up an HIE that deploys JeMPI behind the OpenHIM with a mapping mediator configured to allow for FHIR-based communication with JeMPI. It also deploys Keycloak for user management and authentication.

To launch this package in dev mode copy and paste this into your terminal in a new folder (ensure you have the instant CLI installed):

wget https://github.com/jembi/platform/releases/latest/download/mpi.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p mpi --dev

OpenHIM Platform

What is the OpenHIM Platform and what can you use it for?

OpenHIM platform is an easy way to set up, manage and operate a Health Information Exchange (HIE). Specifically, it is the following:

  • A toolbox of open-source tools, grouped into packages, that are used within an HIE.

  • The glue that ties these tools together. These are often in the form of OpenHIM mediators which are just microservices that talk to OpenHIM.

  • A CLI tool to deploy and manage these packages.

The Problem

We at Jembi want to stop rebuilding solutions from near scratch each time we need an HIE implementation. It would be beneficial to us and others doing the same work to focus more on the unique needs of a country rather than the intricacies of a production deployment of an HIE.

Operating production-grade HIE systems is hard, because of these issues:

  • Need to support up to national scale

  • An always-present need for high level of security

  • Difficulty of deploying complex systems that have many components

  • Considerations for high availability/fault tolerance

  • Setting up monitoring of all services within an HIE

  • Common HIE services require very specific knowledge, i.e.:

    • Patient matching

    • Efficient reporting

    • Data standards

The Solution

OpenHIM Platform provides an opinionated way to deploy, secure and scale highly-available services for an HIE environment. It provides a set of services to solve common HIE challenges:

  • Patient matching

  • FHIR support

  • Reporting services

  • Extensible for country needs

  • Deploying/Operating/Managing HIE services

OpenHIM Platform is powered by the Instant OpenHIE deployment tool.

Central Data repository (no reporting)

Note: This recipe is in a pre-release alpha stage. It is usable, but use it at your own risk.

This recipe sets up an HIE that does the following:

  • Accepts FHIR bundles submitted securely through an IOL (OpenHIM)

  • Stores Clinical FHIR data to a FHIR store (HAPI FHIR)

  • Stores Patient Demographic data to an MPI (JeMPI)

  • Pushes FHIR resources to Kafka for other external systems to use

To launch this package in dev mode copy and paste this into your terminal in a new folder (ensure you have the instant CLI installed):

wget https://github.com/jembi/platform/releases/latest/download/cdr.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p cdr --dev

Packages

The OpenHIM Platform includes a number of base packages which are useful for supporting Health Information Exchange workflows. Each section below describes the details of these packages.

Packages can be stood up individually using the instant package init -n <package_name> command, or they can be included in your own recipes by creating a recipe that references the necessary packages and any custom configuration packages.

Monitoring

A package for monitoring the platform services

The monitoring package sets up services to monitor the entire deployed stack. This includes the state of the servers involved in the docker swarm, the docker containers themselves and particular applications such as Kafka. It also captures the logs from the various services.

This monitoring package uses:

  • Grafana: for dashboards

  • Prometheus: for recording metrics

  • Cadvisor: for reading docker container metrics

  • Loki: for storing logs

  • Node Exporter: for monitoring host machine metrics like CPU, memory etc

To use the monitoring services, add the monitoring package id to your list of package ids when standing up the platform.

Adding application specific metrics

The monitoring service utilises service discovery to discover new metric endpoints to scrape.

To use custom metrics for an application, first configure that application to provide a Prometheus compatible metrics endpoint. Then, let the monitoring service know about it by configuring specific docker service labels that tell the monitoring service to add a new endpoint to scrape. E.g. see lines 8-9:

  kafka-minion:
    image: quay.io/cloudhut/kminion:master
    hostname: kafka-minion
    environment:
      KAFKA_BROKERS: kafka:9092
    deploy:
      labels:
        - prometheus-job-service=kafka
        - prometheus-address=kafka-minion:8080

prometheus-job-service lets Prometheus know to enable monitoring for this container and prometheus-address gives the endpoint that Prometheus can access the metrics on. By default this is assumed to be at the path /metrics by Prometheus.

By using the prometheus-job-service label, Prometheus will only create a single target for your application even if it is replicated via the service config in docker swarm. If you would like to monitor each replica separately (i.e. if metrics are only captured for that replica and not shared to some central location in the application cluster), you can instead use the prometheus-job-task label and Prometheus will create a target for each replica.

A full list of supported labels is given below:

  • prometheus-job-service - indicates this service should be monitored

  • prometheus-job-task - indicates each task in the replicated service should be monitored separately

  • prometheus-address - the service address Prometheus can scrape metrics from, can only be used with prometheus-job-service

  • prometheus-scheme - the scheme to use when scraping a task or service (e.g. http or https), defaults to http

  • prometheus-metrics-path - the path to the metrics endpoint on the target (defaults to /metrics)

  • prometheus-port - the port of the metrics endpoint. Only usable with prometheus-job-task, defaults to all exposed ports for the container if no label is present

All services must also be on the prometheus_public network to be visible to Prometheus for metrics scraping.
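As an illustrative sketch (the service name, image, job name and metrics port here are hypothetical), a replicated service that should be scraped per replica could be configured as follows:

  my-app:
    image: example/my-app:latest
    deploy:
      replicas: 3
      labels:
        # scrape every replica separately
        - prometheus-job-task=my-app
        # port of the metrics endpoint exposed by each task
        - prometheus-port=9100
        # optional, defaults to /metrics
        - prometheus-metrics-path=/metrics
    networks:
      - prometheus_public

networks:
  prometheus_public:
    # assumed to be created by the monitoring package
    external: true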

Adding additional dashboards

To add additional dashboards simply use docker configs to add new Grafana dashboard json files into this directory in the Grafana container: /etc/grafana/provisioning/dashboards/

That directory will be scanned periodically and new dashboards will automatically be added to Grafana.

Grafana dashboard json files may be exported directly from Grafana when saving dashboards, or you may look up one of the many existing dashboards in the Grafana marketplace.
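A rough sketch of how a dashboard file could be added with Docker Swarm configs in a package's compose file; the config name, local file path and Grafana service name are assumptions:

configs:
  my-extra-dashboard:
    file: ./dashboards/my-extra-dashboard.json

services:
  grafana:
    configs:
      - source: my-extra-dashboard
        target: /etc/grafana/provisioning/dashboards/my-extra-dashboard.json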

Interoperability Layer Openhim

The interoperability layer that enables simpler data exchange between the different systems. It is also the security layer for the other systems.

This component consists of two services:

  • Interoperability Layer - OpenHIM for routing the events

  • Mongo DB for storing the transactions

It provides an interface for:

  1. Checking the transaction logs

  2. Configuring the channels to route the events

  3. User authentication logs

  4. Service logs

  5. Re-running transaction tasks

  6. Reprocess mediator launching

OpenHIM is based on three main services: openhim-core as the backend, openhim-console as the frontend, and MongoDB as the database.

It is a mandatory component in the stack and the entry point for all incoming requests from the external systems.

Local Development

A Kafka consumer that maps FHIR resources to a flattened data structure

Kafka-mapper-consumer

A Kafka processor that consumes messages from Kafka topics. These messages are mapped according to the mapping defined in the file fhir-mapping.json.

The flattened data is then sent to the Clickhouse DB to be stored.

Each topic has its own table mapping, plugin and filter and one topic may be mapped in different ways.

An example of fhir-mapping.json can be found in the package.

Each new message with a new ID will be inserted as a new row in the table defined in the mapping. An update to a message will result in a corresponding update in the Clickhouse DB. Link to GitHub repo: https://github.com/jembi/kafka-mapper-consumer.

Environment Variables

Listed in this page are all environment variables needed to run the Interoperability Layer OpenHIM.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| OPENHIM_CORE_MEDIATOR_HOSTNAME | String | Hostname of the OpenHIM mediator | Yes | localhost |
| OPENHIM_MEDIATOR_API_PORT | Number | Port of the OpenHIM mediator | Yes | 8080 |
| OPENHIM_CORE_INSTANCES | Number | Number of openhim-core instances | No | 1 |
| OPENHIM_CONSOLE_INSTANCES | Number | Number of openhim-console instances | No | 1 |
| OPENHIM_MONGO_URL | String | MongoDB connection string | Yes | mongodb://mongo-1:27017/openhim |
| OPENHIM_MONGO_ATNAURL | String | MongoDB connection string for the ATNA audit database | Yes | mongodb://mongo-1:27017/openhim |
| OPENHIM_CPU_LIMIT | Number | CPU limit usage for openhim-core | No | 0 |
| OPENHIM_CPU_RESERVE | Number | Reserved CPU usage for openhim-core | No | 0.05 |
| OPENHIM_MEMORY_LIMIT | String | RAM usage limit for openhim-core | No | 3G |
| OPENHIM_MEMORY_RESERVE | String | Reserved RAM for openhim-core | No | 500M |
| OPENHIM_CONSOLE_CPU_LIMIT | Number | CPU limit usage for openhim-console | No | 0 |
| OPENHIM_CONSOLE_CPU_RESERVE | Number | Reserved CPU usage for openhim-console | No | 0.05 |
| OPENHIM_CONSOLE_MEMORY_LIMIT | String | RAM usage limit for openhim-console | No | 2G |
| OPENHIM_CONSOLE_MEMORY_RESERVE | String | Reserved RAM for openhim-console | No | 500M |
| OPENHIM_MONGO_CPU_LIMIT | Number | CPU limit usage for mongo | No | 0 |
| OPENHIM_MONGO_CPU_RESERVE | Number | Reserved CPU usage for mongo | No | 0.05 |
| OPENHIM_MONGO_MEMORY_LIMIT | String | RAM usage limit for mongo | No | 3G |
| OPENHIM_MONGO_MEMORY_RESERVE | String | Reserved RAM for mongo | No | 500M |
| MONGO_SET_COUNT | Number | Number of instances of Mongo | Yes | 1 |

Local Development

The Interoperability Layer is the base of the Platform architecture.

Accessing the services

OpenHIM

  • Console: http://127.0.0.1:9000

  • Username: root@openhim.org

  • Password: instant101

Testing the Interoperability Component

As part of the Interoperability Layer setup we also do some initial config import for connecting the services together.

  • OpenHIM: Import a channel configuration that routes requests to the Data Store - HAPI FHIR service

This config importer will import channels and configuration according to the file openhim-import.json in the folder <path to project packages>/interoperability-layer-openhim/importer/volume.

Environment Variables

Listed in this page are all environment variables needed to run the Kafka mapper consumer.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| KAFKA_HOST | String | Kafka hostname | No | kafka |
| KAFKA_PORT | Number | Kafka port | No | 9092 |
| CLICKHOUSE_HOST | String | Clickhouse hostname | No | analytics-datastore-clickhouse |
| CLICKHOUSE_PORT | String | Clickhouse port | No | 8123 |

Local Development

Generic Logstash pipeline for ELK stack.

Adding pipelines and configs

To add Logstash config files, you can add files into the <path to project packages>/data-mapper-logstash/pipeline.

Developing the Logstash configs locally

When seeking to make changes to the Logstash configs without having to repeatedly start and stop the service, one can set the LOGSTASH_DEV_MOUNT env var in your .env file to true to attach the service's config files to those on your local machine.

Cluster

When attaching Logstash to an Elasticsearch cluster, ensure you use the ES_HOSTS environment variable, e.g. ES_HOSTS="analytics-datastore-elastic-search-1:9200","analytics-datastore-elastic-search-2:9200","analytics-datastore-elastic-search-3:9200", and reference it in your Logstash configs, e.g. hosts => [$ES_HOSTS]

Notes

  • With LOGSTASH_DEV_MOUNT=true, you have to set the LOGSTASH_PACKAGE_PATH variable with the absolute path to package containing your Logstash config files, i.e., LOGSTASH_PACKAGE_PATH=/home/user/Documents/Projects/platform/data-mapper-logstash.

  • WARNING: do not edit the pipeline files from within the Logstash container, or the group ID and user ID will change, and subsequently will result in file permission errors on your local file system.

Data Mapper Logstash

Generic Logstash pipeline for ELK stack.

Logstash provides a data transformation pipeline for analytics data. In the platform it is responsible for transforming FHIR messages into a flattened object that can be inserted into Elasticsearch.

Input

Logstash allows for different types of input to read the data: Kafka, HTTP ports, files, etc.

Filters

With a set of filters and plugins, the data can be transformed, filtered, and conditioned.

This allows the creation of a structured and flattened object out of many nested and long resources.

Accessing the different fields will be much easier and we will get rid of the unused data.

Output

To save the data, Logstash provides a set of outputs such as: Elasticsearch, S3, files, etc.

Kafka Mapper Consumer

A Kafka consumer that maps FHIR resources to a flattened data structure.

Job Scheduler Ofelia

A job scheduling tool.

Environment Variables

Listed in this page are all environment variables needed to run the Monitoring package.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| GF_SECURITY_ADMIN_USER | String | Username of Grafana service | No | admin |
| GF_SECURITY_ADMIN_PASSWORD | String | Password of Grafana service | No | dev_password_only |

Analytics Datastore - Clickhouse

Clickhouse is a SQL datastore.

Local Development

Clickhouse is a SQL datastore

Launching

Launching this package executes the following two steps:

  • Running Clickhouse service

  • Running config importer to run the initial SQL script

Initializing ClickHouse

The config importer will be launched to run a NodeJS script after ClickHouse has started.

It will run SQL queries to initialize the tables and the schema, and can also include initial seed data if required.

The config importer looks for two files clickhouseTables.js and clickhouseConfig.js found in <path to project packages>/analytics-datastore-clickhouse/importer/config.

For specific implementation, this folder can be overridden.

Environment Variables

Listed in this page are all environment variables needed to run Logstash.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| LOGSTASH_INSTANCES | Number | Number of service replicas | No | 1 |
| LOGSTASH_DEV_MOUNT | Boolean | DEV mount mode enabling flag | No | false |
| LOGSTASH_PACKAGE_PATH | String | Logstash package absolute path | Yes if LOGSTASH_DEV_MOUNT is true | |
| LS_JAVA_OPTS | String | JVM heap size; it should be no less than 4GB and no more than 8GB (a maximum of 50-75% of total RAM) | No | -Xmx2g -Xms2g |
| ES_ELASTIC | String | ElasticSearch Logstash user password | Yes | dev_password_only |
| ES_HOSTS | String | Elasticsearch connection string | Yes | analytics-datastore-elastic-search:9200 |
| KIBANA_SSL | Boolean | SSL protocol requirement | No | True |
| LOGSTASH_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| LOGSTASH_MEMORY_RESERVE | String | Reserved RAM | No | 500M |

Environment Variables

Listed in this page are all environment variables needed to run Clickhouse.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| CLICKHOUSE_HOST | String | The service name (host) of Clickhouse | No | analytics-datastore-clickhouse |
| CLICKHOUSE_PORT | Number | The port that the Clickhouse service is exposed on | No | 8123 |

Client Registry - SanteMPI

A patient matching and deduplication service for the platform

This package consists of four services:

  • Postgres Main DB

  • Postgres Audit DB

  • SanteMPI Web UI -

  • SanteMPI API -

Local Development

Accessing the Service

Jsreport - http://127.0.0.1:5488/

Scripts/Templates Development

When seeking to make changes to the Jsreport scripts/templates without having to repeatedly start and stop the service, one can set the JS_REPORT_DEV_MOUNT environment variable in your .env file to true to attach the service's content files to those on your local machine.

  • You have to run the set-permissions.sh script before and after launching Jsreport when JS_REPORT_DEV_MOUNT=true.

  • REMEMBER TO EXPORT THE JSREXPORT FILE WHEN YOU'RE DONE EDITING THE SCRIPTS. More info is available at https://jsreport.net/learn/import-export.

  • With JS_REPORT_DEV_MOUNT=true, you have to set the JS_REPORT_PACKAGE_PATH variable with the absolute path to the Jsreport package on your local machine, i.e., JS_REPORT_PACKAGE_PATH=/home/user/Documents/Projects/platform/dashboard-visualiser-jsreport

  • Remember to shut down Jsreport before changing git branches if JS_REPORT_DEV_MOUNT=true, otherwise, the dev mount will persist the Jsreport scripts/templates across your branches.

Export & Import

After editing the templates in Jsreport, you will need to save these changes. It is advised to export a file containing all the changes, named export.jsrexport, and put it into the folder <path to project packages>/dashboard-visualiser-jsreport/importer.

The config importer of Jsreport will import the export.jsrexport and then all the templates, assets, and scripts will be loaded in Jsreport.

Environment Variables

Listed in this page are all environment variables needed to run SanteMPI.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| SANTEMPI_INSTANCES | Number | Number of service replicas | No | 1 |
| SANTEMPI_MAIN_CONNECTION_STRING | String | Connection string to SanteMPI | No | See note below |
| SANTEMPI_AUDIT_CONNECTION_STRING | String | Audit connection string to SanteMPI | No | See note below |
| SANTEMPI_POSTGRESQL_PASSWORD | String | SanteMPI PostgreSQL password | No | SanteDB123 |
| SANTEMPI_POSTGRESQL_USERNAME | String | SanteMPI PostgreSQL username | No | santempi |
| SANTEMPI_REPMGR_PRIMARY_HOST | String | SanteMPI PostgreSQL replication manager primary host | No | santempi-psql-1 |
| SANTEMPI_REPMGR_PARTNER_NODES | String | SanteMPI PostgreSQL replication manager partner node hosts | Yes | santempi-psql-1,santempi-psql-2,santempi-psql- |

Note

The environment variable SANTEMPI_REPMGR_PARTNER_NODES will differ between cluster and single mode.

Default value for SANTEMPI_MAIN_CONNECTION_STRING:

server=santempi-psql-1;port=5432; database=santedb; user id=santedb; password=SanteDB123; pooling=true; MinPoolSize=5; MaxPoolSize=15; Timeout=60;

Default value for SANTEMPI_AUDIT_CONNECTION_STRING:

server=santempi-psql-1;port=5432; database=auditdb; user id=santedb; password=SanteDB123; pooling=true; MinPoolSize=5; MaxPoolSize=15; Timeout=60;

Local Development

Elasticsearch is the datastore for the Elastic (ELK) Stack

Launching

Launching this package follows different steps:

  • [Cluster mode] Creating certificates and configuring the nodes

  • Running Elasticsearch

  • Setting Elasticsearch passwords

  • Importing Elasticsearch index

Importing

To initialize the index mapping in Elasticsearch, a helper container is launched to import a config file into Elasticsearch. The config importer looks for a file named fhir-enrich-report.json in <path to project packages>/analytics-datastore-elastic-search/importer.

The file fhir-enrich-report.json contains the mapping of the index fhir-enrich-reports.

If we don't specify a mapping, Elasticsearch will create a dynamic mapping for the incoming data. This dynamic mapping may cause issues once we start sending data, as it doesn't necessarily conform 100% to the data types that we're expecting when querying the data out of Elasticsearch again.

Therefore, the mapping should be initialized in Elasticsearch using the config importer.

The file fhir-enrich-report.json is just an example; the name and the mapping can be overridden.

Running in Dev Mode

When running in DEV mode, Elasticsearch is reachable at:

http://127.0.0.1:9201/

Elasticsearch Backups

For detailed steps about creating backups see: Snapshot filesystem repository docs.

Elasticsearch offers the functionality to save a backup in different ways, for further understanding, you can use this link: Register a snapshot repository docs.

Elasticsearch Restore

To see how to restore snapshots in Elasticsearch: Snapshot Restore docs.

Dashboard Visualiser - Superset

Superset is a visualisation tool meant for querying data from a SQL-type database.

Version upgrade process (with rollback capability)

By default, if you simply update the image that the superset service uses to a later version, when the container is scheduled it will automatically run a database migration and the version of Superset will be upgraded. The problem, however, is that if there is an issue with this newer version you cannot roll back the upgrade, since the database migration that ran will cause the older version to throw an error and the container will no longer start. As such, it is recommended to first create a postgres dump of the Superset postgres database before attempting to upgrade Superset's version.

  1. Exec into the postgres container as the root user (otherwise you will get write permission issues)

docker exec -u root -it superset_postgres-metastore-1.container-id-here bash

  2. Run the pg_dump command on the superset database. The database name is stored in SUPERSET_POSTGRESQL_DATABASE and defaults to superset

pg_dump superset -c -U admin > superset_backup.sql

  3. Copy that dumped sql script outside the container

docker cp superset_postgres-metastore-1.container-id-here:/superset_backup.sql /path/to/save/to/superset_backup.sql

  4. Update the superset version (either through a platform deploy or with a docker command on the server directly -- docker service update superset_dashboard-visualiser-superset --image apache/superset:tag)

Rolling back upgrade

In the event that something goes wrong you'll need to rollback the database changes too, i.e.: run the superset_backup.sql script we created before upgrading the superset version

  1. Copy the superset_backup.sql script into the container

docker cp /path/to/save/to/superset_backup.sql superset_postgres-metastore-1.container-id-here:/superset_backup.sql

  2. Exec into the postgres container

docker exec -it superset_postgres-metastore-1.container-id-here bash

  3. Run the sql script (where -d superset is the database name stored in SUPERSET_POSTGRESQL_DATABASE)

cat superset_backup.sql | psql -U admin -d superset

Analytics Datastore - Elasticsearch

Elasticsearch is the datastore for the Elastic (ELK) Stack.

Dashboard Visualiser - Jsreport

Jsreport is a visualisation tool configured to query data from Elasticsearch.

Running in Clustered Mode

Pre-Deploy Configuration

If running in clustered mode, take note that each machine has to have the following vm.max_map_count setting:

sysctl -w vm.max_map_count=262144
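Note that sysctl -w does not persist across reboots; a common way to make the setting permanent on Linux hosts (a general Linux tip, not a platform-specific script) is:

echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p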

Environment Variables

Listed in this page are all environment variables needed to run Jsreport.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| JS_REPORT_LICENSE_KEY | String | Service license key | Yes | |
| JS_REPORT | String | Jsreport service password | No | dev_password_only |
| JS_REPORT_USERNAME | String | Jsreport service username | No | admin |
| JS_REPORT_SECRET | String | Secret password for the authentication of a cookie session related to the extension used in Jsreport | No | dev_secret_only |
| ES_HOSTS | String | Elasticsearch connection string | No | analytics-datastore-elastic-search:9200 |
| ES_PASSWORD | String | Elasticsearch password (for request authentication) | No | dev_password_only |
| ES_USERNAME | String | Elasticsearch username (for request authentication) | No | elastic |
| JS_REPORT_INSTANCES | Number | Number of service replicas | No | 1 |
| JS_REPORT_SSL | Boolean | SSL protocol requirement | No | false |
| JS_REPORT_CONFIG_FILE | String | Path to the service import file | No | export.jsrexport |
| JS_REPORT_DEV_MOUNT | Boolean | Dev mount mode enabling flag | No | false |
| JS_REPORT_PACKAGE_PATH | String | Local path to package | Yes if JS_REPORT_DEV_MOUNT is set to true | |
| JS_REPORT_CPU_LIMIT | Number | CPU usage limit | No | 0 |
| JS_REPORT_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| JS_REPORT_CPU_RESERVE | Number | Reserved CPU | No | 0.05 |
| JS_REPORT_MEMORY_RESERVE | String | Reserved RAM | No | 500M |

Environment Variables

Listed in this page are all environment variables needed to run and initialize Elasticsearch.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| ES_ELASTIC | String | Elasticsearch super-user password | Yes | dev_password_only |
| ES_KIBANA_SYSTEM | String | The password for the user Kibana used to connect and communicate with Elasticsearch | Yes | dev_password_only |
| ES_LOGSTASH_SYSTEM | String | The password for the user Logstash used to map and transform the data before storing it in Elasticsearch | Yes | dev_password_only |
| ES_BEATS_SYSTEM | String | The password for the user the Beats use when storing monitoring information in Elasticsearch | Yes | dev_password_only |
| ES_REMOTE_MONITORING_USER | String | The password for the user Metricbeat uses when collecting and storing monitoring information in Elasticsearch. It has the remote_monitoring_agent and remote_monitoring_collector built-in roles | Yes | dev_password_only |
| ES_APM_SYSTEM | String | The password for the user of the APM server used when storing monitoring information in Elasticsearch | Yes | dev_password_only |
| ES_LEADER_NODE | String | The leader service name (the service name in single mode, or the leader service name in cluster mode). This is used by the config importer to initialize the mapping inside Elasticsearch | Yes | analytics-datastore-elastic-search |
| ES_HEAP_SIZE | String | The heap size is the amount of RAM allocated to the Java Virtual Machine of an Elasticsearch node. -Xms and -Xmx should be set to the same value (50% of the total available RAM, up to a maximum of 31GB) | No | -Xms2048m -Xmx2048m |
| ES_SSL | Boolean | This variable is used only for the config importer of Elasticsearch (internal connection between the Elasticsearch and importer docker services) | No | false |
| ES_MEMORY_LIMIT | String | RAM usage limit of the Elasticsearch service | No | 3G |
| ES_MEMORY_RESERVE | String | Reserved RAM for the Elasticsearch service | No | 500M |
| ES_PATH_REPO | String | The path to the repository in the container to store Elasticsearch backup snapshots | No | /backups/elasticsearch |

Message Bus - Kafka

Kafka is a stream processing platform which groups like-messages together, such that the number of sequential writes to disk can be increased, thus effectively increasing database speeds.

Components

The message-bus-kafka package consists of a few components, those being Kafka, Kafdrop, and Kminion.

The services consuming from and producing to kafka might crash if Kafka is unreachable, so this is something to bear in mind when making changes to or restarting the kafka service.

Kafka

The core stream-processing element of the message-bus-kafka package.

Kafdrop

Kafdrop is a web user-interface for viewing Kafka topics and browsing consumer-groups.

Kminion

A prometheus exporter for Kafka.

Local Development

Kafka Topics Configuration

Using a config importer, the required topics are created in Kafka. The topics are specified using the KAFKA_TOPICS environment variable and must use the following syntax:

topic or topic:partition:replicationFactor

Using topics 2xx, 3xx, and metrics (partition=3, replicationFactor=1) as an example, we would declare:

KAFKA_TOPICS=2xx,3xx,metrics:3:1

where topics are separated by commas.

Accessing Kafdrop

Kafdrop - http://127.0.0.1:9013/

Dashboard Visualiser - Kibana

Kibana is a visualisation tool forming part of the Elastic (ELK) Stack for creating dashboards by querying data from ElasticSearch.

Local Development

Accessing the Service

Kibana - http://127.0.0.1:5601/

Importing Saved Objects

The config importer will import the file kibana-export.ndjson that exists in the folder <path to project packages>/dashboard-visualiser-kibana/importer.

The saved objects that will be imported are the index patterns and dashboards. If you made any changes to these objects please don't forget to export them and save the file kibana-export.ndjson under the folder specified above.
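If you prefer to script this export rather than use the Kibana UI, a rough sketch using Kibana's saved objects export API could look as follows (assuming the dev-mode URL and the default credentials listed in the environment variables below; adjust the object types to what you need):

curl -X POST "http://127.0.0.1:5601/api/saved_objects/_export" \
  -u elastic:dev_password_only \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"type": ["index-pattern", "dashboard"], "includeReferencesDeep": true}' \
  -o kibana-export.ndjson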

Environment Variables

Listed in this page are all environment variables needed to run Kibana.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| ES_KIBANA_SYSTEM | String | Elasticsearch kibana_system user password (used by Kibana to authenticate with Elasticsearch) | Yes | |
| KIBANA_INSTANCES | Number | Number of service replicas | No | 1 |
| KIBANA_YML_CONFIG | String | Path to the service configuration file | No | kibana-kibana.yml |
| KIBANA_USERNAME | String | Service username | No | elastic |
| KIBANA_PASSWORD | String | Service password | No | dev_password_only |
| KIBANA_SSL | Boolean | SSL protocol requirement | No | True |
| KIBANA_CONFIG_FILE | String | Path to the dashboard import file | No | kibana-export.ndjson |
| KIBANA_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| KIBANA_MEMORY_RESERVE | String | Reserved RAM | No | 500M |

Environment Variables

Listed in this page are all environment variables needed to run Ofelia.

The Ofelia service does not make use of any environment variables. However, when specifying jobs in the config.ini file(s) we can pass any environment variable in.

Example:

[job-run "mongo-backup"]
schedule= @daily
image= mongo:4.2
network= mongo_backup
volume= /backups:/tmp/backups
command= sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete= true

In the example above, OPENHIM_MONGO_URL is an environment variable.

FHIR Datastore HAPI FHIR

A FHIR compliant server for the platform.

The HAPI FHIR service will be used for two mandatory functionalities:

  • A validator of FHIR messages

  • A storage of FHIR messages

A validator

Incoming messages from an EMR or Postman bundles are not always well structured and may be missing required elements or be malformed.

HAPI FHIR will use a FHIR IG to validate these messages.

It will reject any invalid resources and it will return errors according to the IG.

HAPI FHIR is the first check to make sure the data injected in the rest of the system conforms to the requirements.

A storage

Backed by a PostgreSQL database, all the validated incoming messages will be stored.

This will allow HAPI FHIR to check for correct links and references between the resources, as well as acting as an additional store for backups in case the data is lost.

Kafka Unbundler Consumer

A kafka processor to unbundle resources into their own kafka topics.

The kafka unbundler will consume resources from the 2xx topic in Kafka, split them according to their resource type and send them back to Kafka on new topics.

Each resource type has its own topic.

Link to the GitHub repo: https://github.com/jembi/kafka-unbundler-consumer.

Environment Variables

Listed in this page are all environment variables needed to run the Message Bus Kafka.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| KAFKA_INSTANCES | Number | Service replicas | No | 1 |
| KAFKA_CPU_LIMIT | Number | CPU usage limit | No | 0 |
| KAFKA_CPU_RESERVE | Number | Reserved CPU | No | 0.05 |
| KAFKA_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| KAFKA_MEMORY_RESERVE | String | Reserved RAM | No | 500M |
| KAFKA_TOPICS | String | Kafka topics | Yes | |
| ZOOKEEPER_CPU_LIMIT | Number | CPU usage limit | No | 0 |
| ZOOKEEPER_CPU_RESERVE | Number | Reserved CPU | No | 0.05 |
| ZOOKEEPER_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| ZOOKEEPER_MEMORY_RESERVE | String | Reserved RAM | No | 500M |
| KMINION_CPU_LIMIT | Number | CPU usage limit | No | 0 |
| KMINION_CPU_RESERVE | Number | Reserved CPU | No | 0.05 |
| KMINION_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| KMINION_MEMORY_RESERVE | String | Reserved RAM | No | 500M |
| KAFDROP_CPU_LIMIT | Number | CPU usage limit | No | 0 |
| KAFDROP_CPU_RESERVE | Number | Reserved CPU | No | 0.05 |
| KAFDROP_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| KAFDROP_MEMORY_RESERVE | String | Reserved RAM | No | 500M |

Local Development

A FHIR compliant server for the platform.

Instant OpenHIE FHIR Data Store Component

This component consists of two services:

  • Postgres

  • HAPI FHIR Server - http://127.0.0.1:3447

Accessing the services

HAPI FHIR

This service is accessible for testing via http://127.0.0.1:3447.

In a publicly accessible deployment this port should not be exposed. The OpenHIM should be used to access HAPI-FHIR.

Testing the HAPI FHIR Component

For testing this component we will be making use of curl to send our request, but any client could be used to achieve the same result.

Execute the command below:

curl http://127.0.0.1:3447/fhir/Patient

Environment Variables

Listed in this page are all environment variables needed to run Superset.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| SUPERSET_USERNAME | String | Service username | No | admin |
| SUPERSET_FIRSTNAME | String | Admin account first name | No | SUPERSET |
| SUPERSET_LASTNAME | String | Admin account last name | No | ADMIN |
| SUPERSET_EMAIL | String | Admin account email address | No | admin@superset.com |
| SUPERSET_PASSWORD | String | Admin account password | No | admin |
| SUPERSET_API_USERNAME | String | Service username | No | admin |
| SUPERSET_API_PASSWORD | String | Service password | No | admin |
| SUPERSET_SSL | Boolean | SSL protocol requirement | No | False |
| CONFIG_FILE | String | Path to the dashboard import file | No | superset-export.zip |

Local Development

Accessing the Service

Superset - http://127.0.0.1:8089/

Using the superset_config.py file

The Superset package is configured to contain a superset_config.py file, which Superset looks for, and subsequently activates the contained feature flags. For more information on the allowed feature flags, visit https://github.com/apache/superset/blob/master/RESOURCES/FEATURE_FLAGS.md.

Importing & Exporting Assets

The config importer written in JS will import the file superset-export.zip that exists in the folder <path to project packages>/dashboard-visualiser-superset/importer/config. The assets that will be imported to Superset are the following:

  • The link to the Clickhouse database

  • The dataset saved from Clickhouse DB

  • The dashboards

  • The charts

If you made any changes to these objects please don't forget to export and save the file as superset-export.zip under the folder specified above. NB! It is not possible to export all these objects from the Superset UI; check the Postman collection CARES DISI CDR -> Superset export assets, where you will find two requests. To do the export, three steps are required:

  1. Run the Get Token Superset request to get the token (please make sure that you are using the correct request URL). An example of a response from Superset that will be displayed: { "access_token": "eyJ0eXAiOiJKV1...." }

  2. Copy the access token and put it into the second request Export superset assets in the Authorization section.

  3. Run the second request Export superset assets . You can save the response into a file called superset-export.zip under the folder specified above.

Your changes should then be saved.
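Purely as an illustrative sketch of that two-step flow (get a token, then export), the Postman requests roughly correspond to the curl calls below. The login endpoint is Superset's standard REST API; the export path shown here is an assumption, and the authoritative URLs are the ones in the Postman collection (the dev-mode URL and default admin credentials are used):

# 1. Get an access token from Superset (standard Superset REST API login)
curl -X POST "http://127.0.0.1:8089/api/v1/security/login" \
  -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "admin", "provider": "db", "refresh": true}'

# 2. Export the assets using the returned access_token
#    (the /api/v1/assets/export/ path is an assumption -- use the URL from the Postman collection)
curl -X GET "http://127.0.0.1:8089/api/v1/assets/export/" \
  -H "Authorization: Bearer <access_token>" \
  -o superset-export.zip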

Local Development

Reverse proxy for secure and insecure nginx configurations.

Nginx Reverse Proxy

This package can be used to secure all of the data transferred to and from services using SSL encryption, and to generate SSL certificates as well.

Instead of configuring each package separately, we're using this package that will hold all of the Nginx configuration.

It will generate Staging or Production certificates from Let's Encrypt to ensure a secure connection (in case we require SSL to be enabled).

It is responsible for routing network traffic to the correct service.

Structure of Reverse Proxy Nginx package

The current package contains the following:

  • config: A folder that contains the general Nginx config for secure and insecure mode.

  • package-conf-insecure: A folder that contains all the insecure configs related to the services that need outside access.

  • package-conf-secure: A folder that contains all the secure configs related to the services that need outside access.

A job using Ofelia exists to renew the certificates automatically based on the certificate renewal period.

Adding new packages that require external access will require adding the Nginx config needed in this package.

Environment Variables

A kafka processor to unbundle resources into their own kafka topics.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| KAFKA_HOST | String | Kafka hostname | No | kafka |
| KAFKA_PORT | Number | Kafka port | No | 9092 |

Reverse Proxy Nginx

Reverse proxy for secure and insecure nginx configurations.

Environment Variables

Variable Name
Description
Default

OPENFN_DATABASE_URL

The URL of the PostgreSQL database

postgresql://openfn:instant101@postgres-1:5432/lightning_dev

OPENFN_DISABLE_DB_SSL

Whether to disable SSL for the database connection

true

OPENFN_IS_RESETTABLE_DEMO

Whether the application is running in resettable demo mode

true

OPENFN_LISTEN_ADDRESS

The IP address to listen on

0.0.0.0

OPENFN_LOG_LEVEL

The log level for the application

debug

OPENFN_ORIGINS

The allowed origins for CORS

http://localhost:4000

OPENFN_PRIMARY_ENCRYPTION_KEY

The primary encryption key

KLu/IoZuaf+baDECd8wG4Z6auwNe6VAmwh9N8lWdJ1A=

OPENFN_SECRET_KEY_BASE

The secret key base

jGDxZj2O+Qzegm5wcZ940RfWO4D6RyU8thNCr5BUpHNwa7UNV52M1/Sn+7RxiP+f

OPENFN_WORKER_RUNS_PRIVATE_KEY

The private key for worker runs

LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRREVtR3drUW5pT0hqVCsKMnkyRHFvRUhyT3dLZFI2RW9RWG9DeDE4MytXZ3hNcGthTFZyOFViYVVVQWNISGgzUFp2Z2UwcEIzTWlCWWR5Kwp1ajM1am5uK2JIdk9OZGRldWxOUUdpczdrVFFHRU1nTSs0Njhldm5RS0h6R29DRUhabDlZV0s0MUd5SEZCZXppCnJiOGx2T1A1NEtSTS90aE5pVGtHaUIvTGFLMldLcTh0VmtoSHBvaFE3OGIyR21vNzNmcWtuSGZNWnc0ZE43d1MKdldOamZIN3QwSmhUdW9mTXludUxSWmdFYUhmTDlnbytzZ0thc0ZUTmVvdEZIQkYxQTJjUDJCakwzaUxad0hmdQozTzEwZzg0aGZlTzJqTWlsZlladHNDdmxDTE1EZWNrdFJGWFl6V0dWc25FcFNiOStjcWJWUXRvdEU4QklON09GClRmaEx2MG9uQWdNQkFBRUNnZ0VBV3dmZyt5RTBSVXBEYThiOVdqdzNKdUN4STE1NzFSbmliRUhKVTZzdzNyS0EKck9HM0w5WTI0cHhBdlVPSm5GMFFzbThrUVQ4RU1MU3B6RDdjdDVON2RZMngvaGY4TThhL0VSWXM4cFlYcXI5Vwpnbnh3NldGZ0R6elFHZ0RIaW0raXNudk5ucFdEbTRGVTRObG02d2g5MzVSZlA2KzVaSjJucEJpZjhFWDJLdE9rCklOSHRVbFcwNFlXeDEwS0pIWWhYNFlydXVjL3MraXBORzBCSDZEdlJaQzQxSWw0N1luaTg1OERaL0FaeVNZN1kKWTlTamNKQ0QvUHBENTlNQjlSanJDQjhweDBjWGlsVXBVZUJSYndGalVwbWZuVmhIa1hiYlM1U0hXWWM4K3pLRQp2ajFqSEpxc2UyR0hxK2lHL1V3NTZvcHNyM2x3dHBRUXpVcEJGblhMMFFLQmdRRDM5bkV3L1NNVGhCallSd1JGCkY2a2xOYmltU2RGOVozQlZleXhrT0dUeU5NSCtYckhsQjFpOXBRRHdtMit3V2RvcWg1ZFRFbEU5K1crZ0FhN0YKbXlWc2xPTW4wdnZ2cXY2Wkp5SDRtNTVKU0lWSzBzRjRQOTRMYkpNSStHUW5VNnRha3Y0V0FSMkpXaURabGxPdAp3R01EQWZqRVIrSEFZeUJDKzNDL25MNHF5d0tCZ1FESzk3NERtV0c4VDMzNHBiUFVEYnpDbG9oTlQ2UldxMXVwCmJSWng4ZGpzZU0vQ09kZnBUcmJuMnk5dVc3Q1pBNFVPQ2s4REcxZ3ZENVVDYlpEUVdMaUp5RzZGdG5OdGgvaU8KT1dJM0UyczZOS0VMMU1NVzh5QWZwNzV4Ung5cnNaQzI2UEtqQ0pWL2lTVjcyNlQ1ZTFzRG5sZUtBb0JFZnlDRgpvbEhhMmhybWxRS0JnUURHT1YyOWd1K1NmMng1SVRTWm8xT1ZxbitGZDhlZno1d3V5YnZ3Rm1Fa2V1YUdXZDh1CnJ4UFM3MkJ6K0Y1dUJUWngvMWtLa0w4Zm94TUlQN0FleW1zOWhUeWVybnkyMk9TVlBJSmN3dExqMUxTeDN3L0kKK0kyaVpsYVl1akVlZXpXbHY1S2R0cUNORjk3Zzh0ck1NTnMySVZKa1h1NXFwUk82V0ZXRzZGL2h4d0tCZ0hnNApHYUpFSFhIT204ekZTU2lYSW5FWGZKQmVWZmJIOUxqNzFrbVRlR3RJZTdhTlVHZnVxY1BYUGRiZUZGSHRsY2ZsCkx6dWwzS3V6VFExdEhGTnIyWkl5MTlQM1o1TSs4R2c5Y1FFeVRWYmlpV2xha2x0cmttRnRtQTI4bE0zVEZPWmkKUUNWMUZpZStjaWRVeC9qRnFma1F0c1VXQ2llSUxSazZOY1d0WGpXcEFvR0JBTGN6Y210VGlUUEFvWnk0MFV1QQpTOXpUd3RsamhmUWJEVTVjb21EcnlKcnFRU0VOdmQ2VW5HdW0zYVNnNk13dDc0NGxidDAyMC9mSGI0WTJkTGhMCmx4YWJ5b1dQUElRRUpLL1NNOGtURFEvYTRyME5tZzhuV3h5bGFLcHQ5WUhmZ2NYMkYzSzUrc0VSUGNFcVZlWFMKdWZkYXdYQVlFampZK3V2UHZ2YzU3RU1aCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K

OPENFN_WORKER_SECRET

The secret key for the worker

secret_here

POSTGRES_USER

The username for the PostgreSQL database

postgres

POSTGRES_SERVICE

The service name for the PostgreSQL database

postgres-1

POSTGRES_DATABASE

The name of the PostgreSQL database

postgres

POSTGRES_PASSWORD

The password for the PostgreSQL database

instant101

POSTGRES_PORT

The port number for the PostgreSQL database

5432

OPENFN_POSTGRESQL_DB

The name of the OpenFn PostgreSQL database

lightning_dev

OPENFN_POSTGRESQL_USERNAME

The username for the OpenFn PostgreSQL database

openfn

OPENFN_POSTGRESQL_PASSWORD

The password for the OpenFn PostgreSQL database

instant101

OPENFN_WORKER_LIGHTNING_PUBLIC_KEY

The public key for the worker lightning

LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF4SmhzSkVKNGpoNDAvdHN0ZzZxQgpCNnpzQ25VZWhLRUY2QXNkZk4vbG9NVEtaR2kxYS9GRzJsRkFIQng0ZHoyYjRIdEtRZHpJZ1dIY3ZybzkrWTU1Ci9teDd6alhYWHJwVFVCb3JPNUUwQmhESURQdU92SHI1MENoOHhxQWhCMlpmV0ZpdU5Sc2h4UVhzNHEyL0piemoKK2VDa1RQN1lUWWs1Qm9nZnkyaXRsaXF2TFZaSVI2YUlVTy9HOWhwcU85MzZwSngzekdjT0hUZThFcjFqWTN4Kwo3ZENZVTdxSHpNcDdpMFdZQkdoM3kvWUtQcklDbXJCVXpYcUxSUndSZFFObkQ5Z1l5OTRpMmNCMzd0enRkSVBPCklYM2p0b3pJcFgyR2JiQXI1UWl6QTNuSkxVUlYyTTFobGJKeEtVbS9mbkttMVVMYUxSUEFTRGV6aFUzNFM3OUsKSndJREFRQUIKLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg

OPENFN_IMAGE

The image name for OpenFn

openfn/lightning:v2.9.5

OPENFN_WORKER_IMAGE

The image name for OpenFn worker

openfn/ws-worker:latest

OPENFN_KAFKA_TRIGGERS_ENABLED

Whether Kafka triggers are enabled

true

OPENFN_API_KEY

The API key for OpenFn

apiKey

OPENFN_ENDPOINT

The endpoint for OpenFn

http://localhost:4000

OPENFN_DOCKER_WEB_CPUS

The number of CPUs allocated to the web container

2

OPENFN_DOCKER_WEB_MEMORY

The amount of memory allocated to the web container

4G

OPENFN_DOCKER_WORKER_CPUS

The number of CPUs allocated to the worker container

2

OPENFN_DOCKER_WORKER_MEMORY

The amount of memory allocated to the worker container

4G

FHIR_SERVER_BASE_URL

The base URL for the FHIR server

http://openhim-core:5001

FHIR_SERVER_USERNAME

The username for the FHIR server

openfn_client

FHIR_SERVER_PASSWORD

The password for the FHIR server

openfn_client_password

Environment Variables

Listed in this page are all environment variables needed to run Hapi-proxy.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| HAPI_SERVER_URL | String | Hapi-fhir server URL | No | http://hapi-fhir:8080/fhir |
| KAFKA_BOOTSTRAP_SERVERS | String | Kafka server | No | kafka:9092 |
| HAPI_SERVER_VALIDATE_FORMAT | String | Path to the service configuration file | No | kibana-kibana.yml |
| HAPI_PROXY_INSTANCES | Number | Number of instances of hapi-proxy | No | 1 |
| HAPI_PROXY_CPU_LIMIT | Number | CPU usage limit | No | 0 |
| HAPI_PROXY_CPU_RESERVE | Number | Reserved CPU usage | No | 0.05 |
| HAPI_PROXY_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| HAPI_PROXY_MEMORY_RESERVE | String | Reserved RAM | No | 500M |

OpenFn

Introduction

Welcome to the documentation for the openfn package! This package is designed to provide a platform for seamless integration and automation of data workflows. Whether you are a developer, data analyst, or data scientist, this package will help you streamline your data processing tasks.

Usage

Once you have added the openfn package, you can start using it in your projects. Here is how to instantiate the package:

instant package init -n openfn --dev

Demo

To get a hands-on experience with the openfn package, try out the demo. The demo showcases the package's capabilities and provides a sample project used to export data from CDR to NDR with transformations. It utilizes a Kafka queue and a custom adapter to map Bundles to be compliant with the FHIR Implementation Guide (IG).

Getting Started

To access the demo, follow these steps:

  1. Visit the OpenFn Demo website.

  2. Use the following demo credentials

username: root@openhim.org
password: instant101
  3. Configure the Kafka trigger:

    • Change the trigger type from webhook to “Kafka Consumer”

    • Enter the configuration details (see docs):

      • Kafka topic: {whichever you want to use} (e.g., “cdr-ndr”)

      • Hosts: {cdr host name}

      • Initial offset reset policy: earliest

      • Connection timeout: 30 (default value, but can be adjusted)

    • Warning: Check Disable this trigger to ensure that consumption doesn’t start until you are ready to run the workflow! Once unchecked, it will immediately start consuming messages off the topic.

Documentation

For more detailed information on the openfn package and its functionalities, please refer to the official documentation. The documentation covers various topics, including installation instructions, usage guidelines, and advanced features.

Guides

Various notes and guide

Message Bus Helper Hapi Proxy

A helper package for the Kafka message bus.

A helper for the Kafka message bus service. It sends data to the HAPI FHIR datastore and then to the Kafka message bus based on the response from HAPI FHIR.

More particularly:

  1. It receives messages from OpenHIM

  2. It sends the data to the HAPI FHIR server and waits for the response

  3. It gets the response. According to the response status, it will send the message to the topic that corresponds to that status (2xx, 4xx, 5xx, ... )

  4. It will send back the response from HAPI FHIR to OpenHIM as well

Provisioning remote servers

Infrastructure tools for the OpenHIM Platform

Deploying from your local environment to a remote server or cluster is easy. All you have to do is ensure the remote servers are set up as a Docker Swarm cluster. Then, from your local environment, you may target a remote environment by using the `DOCKER_HOST` env var, e.g. as shown below.
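A minimal sketch of what this looks like in practice (the user and hostname are placeholders):

# Point the local Docker CLI (and therefore the instant CLI) at the remote Swarm manager over SSH
export DOCKER_HOST=ssh://ubuntu@remote-swarm-manager.example.com

# Commands now run against the remote cluster
docker node ls
instant package init -p cdr-dw

# Unset the variable to target the local Docker daemon again
unset DOCKER_HOST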

Setting up new servers

In addition, as part of the OpenHIM Platform Github repository we also provide scripts to easily set up new servers. The Terraform scripts are able to instantiate servers in AWS and the Ansible scripts are able to configure those servers to be ready to accept OpenHIM Platform packages.

Ansible

See the Ansible scripts in the OpenHIM Platform repository.

It is used for:

  • Adding users to the remote servers

  • Provisioning the remote servers in single and cluster mode: user and firewall configuration, Docker installation, Docker authentication and Docker Swarm provisioning.

All the passwords are saved securely using Keepass.

In the inventories, there are different environment configurations (development, production and staging) that contain: the list of users and their SSH keys, Docker credentials and the definition of the hosts.

Terraform

Terraform is used to create and set up AWS servers. See the Terraform scripts in the OpenHIM Platform repository.

Reverse Proxy Traefik

Reverse proxy for secure traefik configurations.

This package is an alternative to the Nginx reverse proxy; it exposes packages using both subdomains and subdirectories to host the following services:

Package
Hosted

Please ensure that the ENV "DOMAIN_NAME_HOST_TRAEFIK" is set. In this documentation we will be using the placeholder "domain" for its value.

Subdomain-Based Reverse Proxy

The following packages do not support subdomains and require the use of domain/subdomain to access over the reverse proxy

Superset

Set the following environment variable in the package-metadata.json in the "./dashboard-visualiser-superset" directory

Jempi

Set the following environment variables in the package-metadata.json in the "./client-registry-jempi" directory

Santempi

Set the following environment variables in the package-metadata.json in the "./client-registry-santempi" directory

Enabling Kibana

Set the following environment variables in the package-metadata.json in the "./dashboard-visualiser-kibana" directory

Subdirectory

Enabling Minio

Set the following environment variables in the package-metadata.json in the "monitoring" directory

MinIO Configuration

The MinIO server is configured to run with the following port settings:

  • API Port: 9090

  • Console Port: 9001

Ensure that your Traefik configuration reflects these ports to properly route traffic to the MinIO services. The API can be accessed at https://<domain>/minio and the Console at https://<domain>/minio-console.

Update your Traefik labels in the docker-compose.yml to match these settings:

# API Configuration
- traefik.http.services.minio.loadbalancer.server.port=9090
# Console Configuration
- traefik.http.services.minio-console.loadbalancer.server.port=9001
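
For additional context, a minimal router setup for the /minio subdirectory might look like the following. This is only a sketch assuming Traefik v2 label syntax; the router and middleware names are illustrative and not taken from the package configuration:

# Route /minio to the MinIO API service and strip the prefix before forwarding
- traefik.http.routers.minio.rule=Host(`domain`) && PathPrefix(`/minio`)
- traefik.http.routers.minio.middlewares=minio-stripprefix
- traefik.http.middlewares.minio-stripprefix.stripprefix.prefixes=/minio
- traefik.http.routers.minio.service=minio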

Enabling Grafana

Set the following environment variables in the package-metadata.json in the "monitoring" directory:

"environmentVariables":
{
# Other Configurations
...
    "KC_GRAFANA_ROOT_URL": "%(protocol)s://%(domain)s/grafana/",
    "GF_SERVER_DOMAIN": "domain",
    "GF_SERVER_SERVE_FROM_SUB_PATH": "true",
}

JS Report

Set the following environment variables in the package-metadata.json in the "dashboard-visualiser-jsreport" directory:

"environmentVariables":
{
# Other Configurations
...
    "JS_REPORT_PATH_PREFIX": "/jsreport"
}

OpenHIM

Set the following environment variables in the package-metadata.json in the "./interoperability-layer-openhim" directory:

"environmentVariables":
{
# Other Configurations
...
    "OPENHIM_SUBDOMAIN": "domain",
    "OPENHIM_CONSOLE_BASE_URL": "http://domain",
    "OPENHIM_CORE_MEDIATOR_HOSTNAME": "domain/openhimcomms",
    "OPENHIM_MEDIATOR_API_PORT": "443"
}

Note: Only the Backend services are accessible through subdirectory paths, not the frontend

Cheat sheet

This page gives a list of common commands and examples for easy reference

Install the latest Instant OpenHIE binary locally:

sudo curl -L https://github.com/openhie/instant-v2/releases/latest/download/instant-linux -o /usr/local/bin/instant

Launch a particular package (with metadata initialisation):

instant package init -n <package_name>

Stop a particular package:

instant package down -n <package_name>

Start a particular package (WITHOUT metadata initialisation):

instant package up -n <package_name>

Destroy (delete all data too) a particular package:

instant package destroy -n <package_name>

Launch a particular recipe (with metadata initialisation) using profiles (which are defined in the config.yaml file):

instant package init -p <profile_name>

Stop a particular recipe:

instant package down -p <profile_name>

Start a particular recipe (WITHOUT metadata initialisation):

instant package up -p <profile_name>

Destroy (delete all data too) a particular recipe:

instant package destroy -p <profile_name>

Add --dev to any `instant` command to expose development ports to the host for packages:

instant package init ... --dev
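
For example, these flags are commonly combined into a single command. The profile name below is a placeholder that must exist in your config.yaml, and the env file path is whatever you use for your deployment:

instant package init -p <profile_name> --env-file=<path to env file> --dev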

Architecture

OpenHIM Platform builds on Instant OpenHIE v2 as the deployment tool, which provides the concepts of packages, profiles and the CLI utility that powers the ability to launch OpenHIM Platform packages and recipes. Please read the Instant OpenHIE Architecture documentation for some foundational concepts about how packages are run.

On this page we discuss the architecture of OpenHIM Platform: a set of packages, and recipes that combine those packages, to create a fully functional HIE from scratch.

Modularity and flexibility

OpenHIM Platform packages can be stood up individually so that implementers can use as much or as little of the package set as they need. However, OpenHIM Platform is designed with some key features that will only be available if a number of packages are set up together. Recipes group those sets of packages together so that it is easier to deploy them all at once.

Component Architecture

The architecture is split into 3 distinct parts:

  • Client: This section represents client systems that might want to interact with the HIE. They have a particular set of interactions that the core OpenHIM Platform packages enable support for.

  • Platform core: These are a set of packages which instantiate applications and mediators that enable the core client workflows to be executed. This includes accepting FHIR bundles, splitting patient demographic data into an MPI and clinical data into a FHIR data store, as well as managing the linkage from patient demographics to clinical data in a way that isn't affected by potential linking and unlinking of patient records via the MPI. More on this later.

  • Platform pluggable services: by design, once the core packages have processed a FHIR request, it is pushed into Kafka for secondary use by other systems. Typically this includes data analytics pipelines, and a default implementation is included in OpenHIM Platform; however, the data can be read and used for any purpose, e.g. syncing to another HIE or sending to a national data warehouse.

Platform core workflow

This is how the core packages interact to split the data into two separate stores, a clinical data store and a patient demographics store.

The reasons for doing this are as follows:

  • With the clinical and patient demographic data split, it is easier to link and unlink patient identities as no data in the clinical store needs to change. Clinical records continue to reference the source patient ID and, whatever happens to that patient, whether they are grouped together with other identities in the MPI or not, that ID remains constant.

  • The split of data is a useful security feature as the clinical data and the Personal Identifiable Information (PII) are stored separately. An attacker would need to compromise both to relate clinical information to a particular person.

  • It prevents duplicate information from being stored in multiple places: a clear source of truth for each type of information is identified. This prevents data from getting out of sync when it is stored in multiple places.


Disaster Recovery Process

Backup & restore process.

Two major procedures should exist in order to recover lost data:

  • Creating backups continuously

  • Restoring the backups

This includes the different databases: MongoDB, PostgreSQL DB and Elasticsearch.

The current implementation will create continuous backups for MongoDB (to back up all the transactions of OpenHIM) and PostgreSQL (to back up the HAPI FHIR data) as follows:

  • Daily backups (for 7 days rotation)

  • Weekly backups (for 4 weeks rotation)

  • Monthly backups (for 3 months rotation)

More details can be found on each service's backup & restore page.

Environment Variables

Listed in this page are all environment variables needed to run Reverse Proxy Nginx.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| DOMAIN_NAME | String | Domain name | Yes | localhost |
| SUBDOMAINS | String | Subdomain names | Yes | |
| RENEWAL_EMAIL | String | Renewal email | Yes | |
| REVERSE_PROXY_INSTANCES | Number | Number of instances | No | 1 |
| STAGING | String | Generate fake or real certificate (true for fake) | No | false |
| NGINX_CPU_LIMIT | Number | CPU usage limit | No | 0 |
| NGINX_CPU_RESERVE | Number | Reserved CPU | No | 0.05 |
| NGINX_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| NGINX_MEMORY_RESERVE | String | Reserved RAM | No | 500M |

HAPI FHIR Data

FHIR messages Backup & Restore.

Validated messages from HAPI FHIR will be stored in the PostgreSQL database.

The following content will detail the backup and restore process of this data.

Backups

This section assumes Postgres backups are made using pg_basebackup

Postgres (Hapi-FHIR)

To start up HAPI FHIR and ensure that the backups can be made, ensure that you have created the HAPI FHIR bind mount directory (e.g. /backup).
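
For reference, a backup of the shape expected by the restore steps below (a base.tar plus a pg_wal.tar) can be produced with pg_basebackup. This is only a sketch; the container ID, the postgres user and the /backup target directory are assumptions that must match your deployment, and authentication (e.g. a password) is deployment-specific:

# Take a tar-format base backup (produces base.tar and pg_wal.tar) into the bind-mounted backup directory
docker exec -t <postgres_leader_container_id> \
  pg_basebackup -h 127.0.0.1 -U postgres -D /backup/postgresql_$(date +%s) --format=tar --wal-method=stream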

Disaster Recovery

NB! DO NOT UNTAR OR EDIT THE FILE PERMISSIONS OF THE POSTGRES BACKUP FILE

Postgres (HAPI FHIR)

Preliminary steps:

  1. Do a destroy of fhir-datastore-hapi-fhir using the CLI binary (./platform-linux for linux)

  2. Make sure the Postgres volumes on nodes other than the swarm leader have been removed as well! You will need to ssh into each server and manually remove them.

  3. Do an init of fhir-datastore-hapi-fhir using the CLI binary

After running the preliminary steps, run the following commands on the node hosting the Postgres leader:

NOTE: The value of the REPMGR_PRIMARY_HOST variable in your .env file indicates the Postgres leader

  1. Retrieve the Postgres leader's container-ID using: docker ps -a. Hereafter called postgres_leader_container_id

  2. Run the following command: docker exec -t <postgres_leader_container_id> pg_ctl stop -D /bitnami/postgresql/data

  3. Wait for the Postgres leader container to die and start up again. You can monitor this using: docker ps -a

  4. Run the following command: docker rm <postgres_leader_container_id>

  5. Retrieve the new Postgres leader's container-ID using docker ps -a, being wary not to use the old postgres_leader_container_id

  6. Retrieve the Postgres backup file's name as an absolute path (/backups/postgresql_xxx). Hereafter called backup_file

  7. Run the following commands in the order listed :

    # Stop the server running in the container
    docker exec -t <postgres_leader_container_id> pg_ctl stop -D /bitnami/postgresql/data
    
    # Clear the contents of /bitnami/postgresql/data
    docker exec -t --user root <postgres_leader_container_id> sh -c 'cd /bitnami/postgresql/data && rm -rf $(ls)'
    
    # Copy over the base.tar file
    sudo docker cp <backup_file>/base.tar <postgres_leader_container_id>:/bitnami/postgresql
    
    # Extract the base.tar file
    docker exec -t --user root <postgres_leader_container_id> sh -c 'tar -xf /bitnami/postgresql/base.tar --directory=/bitnami/postgresql/data'
    
    # Copy over the pg_wal.tar file
    sudo docker cp <backup_file>/pg_wal.tar <postgres_leader_container_id>:/bitnami/postgresql
    
    # Extract pg_wal.tar
    docker exec -t --user root <postgres_leader_container_id> sh -c 'tar -xf /bitnami/postgresql/pg_wal.tar --directory=/bitnami/postgresql/data/pg_wal'
    
    # Copy conf dir over
    docker exec -t --user root <postgres_leader_container_id> sh -c 'cp -r /bitnami/postgresql/conf/. /bitnami/postgresql/data'
    
    # Set pg_wal.tar permissions
    docker exec -t --user root <postgres_leader_container_id> sh -c 'cd /bitnami/postgresql/data/pg_wal && chown -v 1001 $(ls)'
    
    # Start the server
    docker exec -t <postgres_leader_container_id> pg_ctl start -D /bitnami/postgresql/data
  8. Do a down of fhir-datastore-hapi-fhir using the CLI binary Example: ./instant-linux package down -n=fhir-datastore-hapi-fhir --env-file=.env.*

  9. Wait for the down operation to complete

  10. Do an init of fhir-datastore-hapi-fhir using the CLI binary Example: ./instant-linux package init -n=fhir-datastore-hapi-fhir --env-file=.env.*

Postgres should now be recovered

Note: After performing the data recovery, it is possible to get an error from HAPI FHIR (500 internal server error) while the data is still being replicated across the cluster. Wait a minute and try again.

Terraform

A tool that enables infrastructure as code to set up servers in AWS EC2.

Cloud Dev environments

To set up a developer's development environment in AWS, run this terraform project. The scripts will allow the joining of an existing VPC, the creation of a public subnet and a variable number of EC2 instances that the user will have SSH access to. Alarms have been created in the scripts which will auto-shutdown the instances after a configurable period, based on CPU metrics. A Lambda scheduled event can also be configured which can run at a regular interval to shut down any instances that may still be running.

Pre-requisites

  • Install AWS CLI

  • Install Terraform

Creating a VPC

This should only be done once per AWS account as there is a limit of 5 per region. Please check whether this has already been run; if it has, use the existing VPC_ID and SUBNET_ID for the following section and skip to the next section.

Navigate to the infrastructure/terraform/vpc directory

Initialize Terraform project:

terraform init

Execute the following:

terraform apply

Copy the output for the next step, e.g for ICAP this has already been run and this is the result:

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

SUBNET_ID = "subnet-0004b0dacb5862d59"
VPC_ID = "vpc-067ab69f374ac9f47"

Creating EC2 instances

Navigate to the infrastructure/terraform directory

Initialize Terraform project:

terraform init

The following properties have to be set:

PUBLIC_KEY_PATH - path to the user's public key file that gets injected into the servers created
PROJECT_NAME    - unique project name that is used to identify each VPC and its resources
HOSTED_ZONE_ID  - (only if you are creating domains, which by default you are) the hosted zone to use, this must be created in the AWS console
DOMAIN_NAME     - the base domain name to use
SUBNET_ID       - the subnet id to use, copy this from the previous step
VPC_ID          - the VPC id to use, copy this from the previous step

The configuration can be done using a Terraform variable file. Create a file called my.tfvars. Below is an example that illustrates the structure of the environment variables file. This example is of a configuration that you can use for the ICAP CDR. Please replace {user} with your own user.

PUBLIC_KEY_PATH = "/home/{user}/.ssh/id_rsa.pub"
PROJECT_NAME = "jembi_platform_dev_{user}"
HOSTED_ZONE_ID = "Z00782582NSP6D0VHBCMI"
DOMAIN_NAME = "{user}.jembi.cloud"
SUBNET_ID = "subnet-0004b0dacb5862d59"
VPC_ID = "vpc-067ab69f374ac9f47"

The AWS account to be used is defined in the ~/.aws/credentials file. If you don't have this file, make sure you have configured the AWS CLI.

cat ~/.aws/credentials
[default]
aws_access_key_id = AKIA6FOPGN5TYHXXXXX
aws_secret_access_key = Qf7E+qcXXXXXXQh4XznN4MM8qR/VP/SXgXXXXX
[jembi-sandbox]
aws_access_key_id = AKIASOHFAV527JCXXXXX
aws_secret_access_key = YXFu3XxXXXXXTeNXdUtIg0gb9Ro7gJ89XXXXX
[jembi-icap]
aws_access_key_id = AKIAVFN7GJJFS6LXXXXX
aws_secret_access_key = b2I6jhwXXXXX4YehBCx/7rKl1JZjYdbtXXXXX

The sample file above has access to 3 accounts and the options for <account_name> could be "default", "jembi-sandbox", "jembi-icap"

Optionally, add ACCOUNT = "<account_name>" to my.tfvars if you want to use something other than default.

The flag for specifying an environment variables file is -var-file, create the AWS stack by running:

terraform apply -var-file my.tfvars

Once the script has run successfully, the ip addresses and domains for the servers will be displayed:

Apply complete! Resources: 13 added, 0 changed, 0 destroyed.

Outputs:

domains = {
  "domain_name" = "{user}.jembi.cloud"
  "node_domain_names" = [
    "node-0.{user}.jembi.cloud",
    "node-1.{user}.jembi.cloud",
    "node-2.{user}.jembi.cloud",
  ]
  "subdomain" = [
    "*.{user}.jembi.cloud",
  ]
}
public_ips = [
  "13.245.143.121",
  "13.246.39.101",
  "13.246.39.92",
]

SSH access should now be available - use the default 'ubuntu' user - ssh ubuntu@<ip_address>

Destroying the AWS stack - run:

terraform destroy -var-file my.tfvars

Resource Allocations

Allot CPU and RAM resources to services, per service, per server.

What it Means

CPU

CPU allocations are specified as a portion of the total number of cores on the host system, i.e., a CPU limit of 2 in a 6-core system is an effective limit of 33.33% of the CPU, and a CPU limit of 6 in a 6-core system is an effective limit of 100% of the CPU.

RAM

Memory allocations are specified as a number followed by their multiplier, i.e., 500M, 1G, 10G, etc.

Defaults

As a default, each package contained in Platform is allocated a maximum of 3 GB of RAM, and 100% CPU usage.

Allocating Resources per Package

The resource allocation can be set on a per-package basis, as specified by the relevant environment variables found in the relevant Packages section.
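
For example, to constrain the Nginx reverse proxy you might set the following in your .env file; the values are illustrative only:

NGINX_CPU_LIMIT=2
NGINX_MEMORY_LIMIT=2G
NGINX_MEMORY_RESERVE=500M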

Notes

  • Be wary of allocating CPU limits to ELK Stack services; they tend to fail their already-implemented health checks when CPU limits are applied.

  • Take note to not allocate less memory to ELK Stack services than their JVM heap sizes.

  • Exit code 137 indicates an out-of-memory failure. When running into this, it means that the service has been allocated too little memory.

Ansible

A tool that enables infrastructure as code for provision of the servers.

Platform Deploy

Prerequisites

  • Linux OS to run commands

  • Install Ansible (as per https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html)

  • Ansible Docker Community Collection installed

  • ansible-galaxy collection install community.docker

Infrastructure and Servers

Please see the /inventories/{ENVIRONMENT}/hosts file for IP details of the designated servers. Set these to the server that you created via Terraform or to an on-premises server.

Ansible

SSH Access

To authenticate yourself on the remote servers your ssh key will need to be added to the sudoers var in the /inventories/{ENVIRONMENT}/group_vars/all.yml.

To have docker access you need to add your ssh key to the docker_users var in the /inventories/{ENVIRONMENT}/group_vars/all.yml file.

An authorised user will need to run the provision_servers.yml playbook to add the SSH key of the person who will run the Ansible scripts to the servers.

Configuration

Before running the Ansible scripts, add each server to your known_hosts file, otherwise Ansible will throw an error. For each server run:

ssh-keyscan -H <host> >> ~/.ssh/known_hosts

To run a playbook you can use:

ansible-playbook \
  --ask-vault-pass \
  --become \
  --inventory=inventories/<INVENTORY> \
  --user=ubuntu \
  playbooks/<PLAYBOOK>.yml

Alternatively, to run all provisioning playbooks with the development inventory (most common for setting up a dev server), use:

ansible-playbook \
  --ask-vault-pass \
  --become \
  --inventory=inventories/development \
  --user=ubuntu \
  playbooks/provision.yml

Vault

The vault password required for running the playbooks can be found in the database.kdbx KeePass file.

To encrypt a new secret with the Ansible vault run:

echo -n '<YOUR SECRET>' | ansible-vault encrypt_string

When prompted for the New password, enter the original Ansible Vault password.

Keepass

Copies of all the passwords used here are kept in the encrypted database.kdbx file.

Please ask your admin for the decryption password of the database.kdbx file.


Elasticsearch

Elasticsearch Backup & Restore.

Elasticsearch Backups

For detailed steps about creating backups, see the Snapshot filesystem repository docs.

Elasticsearch offers the functionality to save a backup in different ways; for further understanding, see the Register a snapshot repository docs.

Elasticsearch Restore

To see how to restore snapshots in Elasticsearch, see the Snapshot Restore docs.

Environment Variables

The following environment variables can be used to configure Traefik:

| Variable | Value | Description |
| --- | --- | --- |
| CERT_RESOLVER | le | The certificate resolver to use for obtaining TLS certificates. |
| CA_SERVER | https://acme-v02.api.letsencrypt.org/directory | The URL of the ACME server for certificate generation. |
| TLS | true | Enable or disable TLS encryption. |
| TLS_CHALLENGE | http | The challenge type to use for TLS certificate generation. |
| WEB_ENTRY_POINT | web | The entry point for web traffic. |
| REDIRECT_TO_HTTPS | true | Enable or disable automatic redirection to HTTPS. |

Community

We encourage any contributions and suggestions! If you would like to get involved, please visit us on Github. Feel free to submit an issue or to create a PR to see your features included in the project.

If you'd like to chat about OpenHIM Platform, please join our community on Discord.

We look forward to growing the set of capabilities within OpenHIM Platform together!


Config Importing

This section defines the configuration importing methods used in the Platform

Overview

Certain packages in the Platform require configuration to enable their intended functionality in a stack. For instance, the OpenHIM package requires the setting of users, channels, roles, and so on. Other packages, such as JS Report or Kibana, require importing of pre-configured dashboards stored in compressed files.

Most services in the Platform can be configured by sending a request containing the required configuration files to the relevant service API. To achieve this, the Platform leverages a helper container to make that API call.

If a package uses a config importer, its configuration can be found in the relevant package's importer section.

The Helper Container

The Process

As part of the package-launching process, the to-be-configured service is deployed and then awaits configuration. Before the configuration can take place, the platform waits for the relevant service to join the Docker internal network. Once the service has joined the network, the helper container is launched and makes the API request to configure the service.

Images

jembi/api-config-importer

For reference on how to use the jembi/api-config-importer image, see the repo here.

jembi/instantohie-config-importer

For reference on how to use the jembi/instantohie-config-importer image, see the repo here.

OpenHIM Data

OpenHIM backup & restore

OpenHIM transaction logs and other data are stored in the Mongo database. Restoring this data means restoring the full history of transactions, which is essential to recover in case something unexpected happens and the data is lost.

In the following sections, we will cover:

  • Already implemented jobs to create backups periodically

  • How to restore the backups

Backup & Restore

Single node

Single node restore docs

The following job may be used to set up a backup job for a single node Mongo:

[job-run "mongo-backup"]
schedule= @every 24h
image= mongo:4.2
network= mongo_backup
volume= /backups:/tmp/backups
command= sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete= true

Cluster

Cluster restore docs

The following job may be used to set up a backup job for clustered Mongo:

[job-run "mongo-backup"]
schedule= @every 24h
image= mongo:4.2
network= mongo_backup
volume= /backups:/tmp/backups
command= sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete= true

Restore

In order to restore from a backup you would need to launch a Mongo container with access to the backup file and the mongo_backup network by running the following command:

docker run -d --network=mongo_backup --mount type=bind,source=/backups,target=/backups mongo:4.2

Then exec into the container and run mongorestore:

mongorestore --uri="mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/openhim?replicaSet=mongo-set" --gzip --archive=/backups/<NAME_OF_BACKUP_FILE>
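
For example, the two steps above can be combined into a single command; the container ID is whatever docker ps reports for the container started above:

docker exec -it <mongo_container_id> mongorestore --uri="mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/openhim?replicaSet=mongo-set" --gzip --archive=/backups/<NAME_OF_BACKUP_FILE>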

The data should be restored.

Performance Testing

The performance scripts are located in the test folder. Follow the steps below to run these scripts against a local or remote server.

Steps

  1. Make sure you have the necessary dependencies installed, more importantly, the k6 binary. Refer to this documentation Building a k6 binary

  2. Set the [BASE_URL] variable to the URL of your server. By default, it is set to "http://localhost:5001", but you can change it to the appropriate URL.

  3. If there are any additional dependencies or configurations required by the [generateBundle] function or any other imported modules, make sure those are set up correctly.

  4. Open your terminal or command prompt and navigate to the directory where the scripts are located, e.g. load.js

  5. Run the script using the k6 run command followed by the filename. In this case, you would run [k6 run load.js]

  6. The script will start executing and sending HTTP POST requests to the specified server. The requests will be sent at a constant arrival rate defined in the [options] object

  7. The script includes some thresholds defined in the [options] object. These thresholds define the performance criteria for the script. If any of the thresholds are exceeded, the script will report a failure.

  8. Monitor the output in the terminal to see the results of the script execution. It will display information such as the number of virtual users (VUs), request statistics, and any failures that occurred.

  9. To visualize the output in Grafana, run the k6 scripts with the K6_PROMETHEUS_RW_SERVER_URL environment variable and the experimental Prometheus output flag set: K6_PROMETHEUS_RW_SERVER_URL=http://localhost:9090/api/v1/write ./k6 run -o experimental-prometheus-rw script.js
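
For example, a complete invocation against a remote server could look like the following; this assumes the script reads BASE_URL from the environment via k6's -e flag, which is an assumption about the script rather than something defined by the Platform:

K6_PROMETHEUS_RW_SERVER_URL=http://localhost:9090/api/v1/write \
  ./k6 run -e BASE_URL=https://<remote-host> -o experimental-prometheus-rw load.js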

Sample load test result

The test results below were obtained on Ubuntu 22.04 with 64 GB RAM and 12 cores. ✓ status code is 200

| Metric | Value |
| --- | --- |
| checks | 100.00% ✓ 188 ✗ 0 |
| data_received | 2.3 MB 39 kB/s |
| data_sent | 3.9 MB 65 kB/s |
| dropped_iterations | 1613 26.656141/s |
| http_req_blocked | avg=8.32µs min=3.54µs med=5.21µs max=259.88µs p(90)=6.87µs p(95)=8.18µs |
| http_req_connecting | avg=1.61µs min=0s med=0s max=153.25µs p(90)=0s p(95)=0s |
| http_req_duration | avg=619.01ms min=421.78ms med=621.54ms max=812.9ms p(90)=692.07ms p(95)=711.18ms |
| http_req_failed | 0.00% ✓ 0 ✗ 188 |
| http_req_receiving | avg=115.87µs min=60.86µs med=110.01µs max=508.35µs p(90)=152.09µs p(95)=158.61µs |
| http_req_sending | avg=125.31µs min=63.72µs med=114.43µs max=825.81µs p(90)=150.33µs p(95)=191.61µs |
| http_req_tls_handshaking | avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s |
| http_req_waiting | avg=618.77ms min=421.58ms med=621.32ms max=812.7ms p(90)=691.81ms p(95)=710.93ms |
| http_reqs | 188 3.106853/s |
| iteration_duration | avg=625.32ms min=427.15ms med=628.41ms max=818.77ms p(90)=698.25ms p(95)=717.76ms |
| iterations | 188 3.106853/s |
| vus | 2 min=1 max=2 |
| vus_max | 2 min=2 max=2 |

Sample volume test results

| Metric | Value |
| --- | --- |
| checks | 100.00% ✓ 954 ✗ 0 |
| data_received | 12 MB 40 kB/s |
| data_sent | 20 MB 66 kB/s |
| dropped_iterations | 23345 77.340364/s |
| http_req_blocked | avg=7.44µs min=2.89µs med=5.34µs max=235.67µs p(90)=7.39µs p(95)=8.49µs |
| http_req_connecting | avg=1.14µs min=0s med=0s max=180.71µs p(90)=0s p(95)=0s |
| http_req_duration | avg=2.49s min=478.77ms med=2.5s max=3.22s p(90)=2.7s p(95)=2.79s |
| http_req_failed | 0.00% ✓ 0 ✗ 954 |
| http_req_receiving | avg=105.4µs min=51.79µs med=103.68µs max=473.23µs p(90)=129.93µs p(95)=140.63µs |
| http_req_sending | avg=130.4µs min=60.02µs med=110.82µs max=2.72ms p(90)=152.04µs p(95)=225.79µs |
| http_req_tls_handshaking | avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s |
| http_req_waiting | avg=2.49s min=478.52ms med=2.5s max=3.22s p(90)=2.7s p(95)=2.79s |
| http_reqs | 954 3.160536/s |
| iteration_duration | avg=2.5s min=483.16ms med=2.5s max=3.23s p(90)=2.7s p(95)=2.79s |
| iterations | 954 3.160536/s |
| vus | 4 min=4 max=8 |
| vus_max | 8 min=7 max=8 |

Development

Adding Packages

  • The Go CLI runs all services from the jembi/platform docker image. When adding new packages or updating existing packages in Platform, you will need to build/update your local jembi/platform image. How to build the image.

  • As you add new packages to the platform, remember to list them in the config.yml file - otherwise the added package will not be detected by the platform-cli tool.
