Elasticsearch

Elasticsearch Backup & Restore.

Elasticsearch Backups

For detailed steps on creating backups, see: Snapshot filesystem repository docs.

Elasticsearch offers several ways to store snapshots; for further detail, see: Register a snapshot repository docs.
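For illustration, here is a minimal sketch of registering a filesystem repository and taking a snapshot, assuming Elasticsearch is reachable on localhost:9200 and that /mnt/es_backups is listed under path.repo in elasticsearch.yml (the repository name and paths are placeholders, not the platform's actual values):

# Register a shared filesystem snapshot repository
curl -X PUT "localhost:9200/_snapshot/my_fs_backup" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# Take a snapshot of all indices and wait for it to complete
curl -X PUT "localhost:9200/_snapshot/my_fs_backup/snapshot_1?wait_for_completion=true"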

Elasticsearch Restore

To see how to restore snapshots in Elasticsearch, see: Snapshot Restore docs.
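As a sketch, using the same placeholder repository and snapshot names as above, a restore call looks like this (the target indices must not already exist, or must be closed or deleted first):

# Restore all indices from the snapshot, without cluster-wide settings
curl -X POST "localhost:9200/_snapshot/my_fs_backup/snapshot_1/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": "*",
  "include_global_state": false
}'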

OpenHIM Data

OpenHIM backup & restore

OpenHIM transaction logs and other data are stored in the Mongo database. Restoring this data means restoring the full transaction history, which is essential to recover if something unexpected happens and the data is lost.

In the following sections, we will cover:

  • Jobs that are already implemented to create backups periodically

  • How to restore the backups

Backup & Restore

Single node

The following job may be used to set up a backup job for a single node Mongo:

[job-run "mongo-backup"]
schedule= @every 24h
image= mongo:4.2
network= mongo_backup
volume= /backups:/tmp/backups
command= sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete= true

Cluster

The following job may be used to set up a backup job for clustered Mongo:

[job-run "mongo-backup"]
schedule= @every 24h
image= mongo:4.2
network= mongo_backup
volume= /backups:/tmp/backups
command= sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete= true
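For a once-off backup outside these scheduled jobs, a mongodump can also be run by hand. The following is a sketch that reuses the network, bind mount, and connection string shown elsewhere on this page:

# Ad-hoc clustered dump; $(date +%s) is expanded by the host shell
docker run --rm --network=mongo_backup \
  --mount type=bind,source=/backups,target=/backups \
  mongo:4.2 \
  mongodump --uri="mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/openhim?replicaSet=mongo-set" \
  --gzip --archive=/backups/mongodump_manual_$(date +%s).gz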

Restore

To restore from a backup, launch a Mongo container with access to the backup file and the mongo_backup network by running the following command:

docker run -d --network=mongo_backup --mount type=bind,source=/backups,target=/backups mongo:4.2

Then exec into the container and run mongorestore:

mongorestore --uri="mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/openhim?replicaSet=mongo-set" --gzip --archive=/backups/<NAME_OF_BACKUP_FILE>
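If you prefer not to open an interactive shell, the same restore can be run in a single step. A sketch, where <container_id> is the ID of the container started above:

docker exec -t <container_id> mongorestore \
  --uri="mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/openhim?replicaSet=mongo-set" \
  --gzip --archive=/backups/<NAME_OF_BACKUP_FILE>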

The data should be restored.

For more detail, see: Single node restore docs and Cluster restore docs.

Disaster Recovery Process

Backup & restore process.

Two major procedures must be in place to recover lost data:

  • Creating backups continuously

  • Restoring the backups

This covers three databases: MongoDB, PostgreSQL, and Elasticsearch.

The current implementation will create continuous backups for MongoDB (to back up all OpenHIM transactions) and PostgreSQL (to back up the HAPI FHIR data) as follows:

  • Daily backups (7-day rotation)

  • Weekly backups (4-week rotation)

  • Monthly backups (3-month rotation)

More details can be found on each service's backup & restore page.
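The rotation itself can be as simple as a scheduled cleanup alongside the dump jobs. The following is a minimal sketch, not the platform's actual implementation, assuming date-stamped archives are sorted into /backups/daily, /backups/weekly, and /backups/monthly:

#!/bin/sh
# Prune archives that have aged out of the rotation windows above
find /backups/daily   -name '*.gz' -mtime +7  -delete   # keep 7 days
find /backups/weekly  -name '*.gz' -mtime +28 -delete   # keep 4 weeks
find /backups/monthly -name '*.gz' -mtime +92 -delete   # keep ~3 months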

HAPI FHIR Data

FHIR messages Backup & Restore.

Validated messages from HAPI FHIR are stored in the PostgreSQL database.

The following sections detail the backup and restore process for this data.

Backups

This section assumes Postgres backups are made using pg_basebackup.

Postgres (Hapi-FHIR)

To start up HAPI FHIR and allow backups to be made, ensure that you have created the HAPI FHIR bind mount directory (e.g. /backup).
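For reference, a tar-format base backup that produces the base.tar and pg_wal.tar files used in the recovery steps below can be taken as in the sketch here; the host, port, and user are placeholders for your deployment's values:

# Tar-format base backup with streamed WAL (-F tar yields base.tar and pg_wal.tar)
pg_basebackup -h localhost -p 5432 -U postgres \
  -D /backup/postgresql_$(date +%s) \
  -F tar -X stream -P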

Disaster Recovery

NB! DO NOT UNTAR OR EDIT THE FILE PERMISSIONS OF THE POSTGRES BACKUP FILE

Postgres (HAPI FHIR)

Preliminary steps:

  1. Do a destroy of fhir-datastore-hapi-fhir using the CLI binary (./platform-linux for Linux)

  2. Make sure the Postgres volumes on nodes other than the swarm leader have been removed as well! You will need to ssh into each server and manually remove them.

  3. Do an init of fhir-datastore-hapi-fhir using the CLI binary

After running the preliminary steps, run the following commands on the node hosting the Postgres leader:

NOTE: The value of the REPMGR_PRIMARY_HOST variable in your .env file indicates the Postgres leader
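For example, assuming the standard .env file in the current working directory, you can confirm which host that is with:

grep REPMGR_PRIMARY_HOST .env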

  1. Retrieve the Postgres leader's container-ID using: docker ps -a. Hereafter called postgres_leader_container_id

  2. Run the following command: docker exec -t <postgres_leader_container_id> pg_ctl stop -D /bitnami/postgresql/data

  3. Wait for the Postgres leader container to die and start up again. You can monitor this using: docker ps -a

  4. Run the following command: docker rm <postgres_leader_container_id>

  5. Retrieve the new Postgres leader's container-ID using docker ps -a, being wary not to use the old postgres_leader_container_id

  6. Retrieve the Postgres backup file's name as an absolute path (/backups/postgresql_xxx). Hereafter called backup_file

  7. Run the following commands in the order listed:

     # Stop the server running in the container
     docker exec -t <postgres_leader_container_id> pg_ctl stop -D /bitnami/postgresql/data

     # Clear the contents of /bitnami/postgresql/data
     docker exec -t --user root <postgres_leader_container_id> sh -c 'cd /bitnami/postgresql/data && rm -rf $(ls)'

     # Copy over the base.tar file
     sudo docker cp <backup_file>/base.tar <postgres_leader_container_id>:/bitnami/postgresql

     # Extract the base.tar file
     docker exec -t --user root <postgres_leader_container_id> sh -c 'tar -xf /bitnami/postgresql/base.tar --directory=/bitnami/postgresql/data'

     # Copy over the pg_wal.tar file
     sudo docker cp <backup_file>/pg_wal.tar <postgres_leader_container_id>:/bitnami/postgresql

     # Extract the pg_wal.tar file
     docker exec -t --user root <postgres_leader_container_id> sh -c 'tar -xf /bitnami/postgresql/pg_wal.tar --directory=/bitnami/postgresql/data/pg_wal'

     # Copy the conf directory over
     docker exec -t --user root <postgres_leader_container_id> sh -c 'cp -r /bitnami/postgresql/conf/. /bitnami/postgresql/data'

     # Set pg_wal file permissions
     docker exec -t --user root <postgres_leader_container_id> sh -c 'cd /bitnami/postgresql/data/pg_wal && chown -v 1001 $(ls)'

     # Start the server
     docker exec -t <postgres_leader_container_id> pg_ctl start -D /bitnami/postgresql/data

  8. Do a down of fhir-datastore-hapi-fhir using the CLI binary. Example: ./instant-linux package down -n=fhir-datastore-hapi-fhir --env-file=.env.*

  9. Wait for the down operation to complete

  10. Do an init of fhir-datastore-hapi-fhir using the CLI binary. Example: ./instant-linux package init -n=fhir-datastore-hapi-fhir --env-file=.env.*

Postgres should now be recovered.

Note: After performing the data recovery, it is possible to get an error from HAPI FHIR (500 internal server error) while the data is still being replicated across the cluster. Wait a minute and try again.
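To sanity-check the recovery, the following sketch lists the databases inside the new leader container; it assumes the image's default postgres superuser, and authentication details may differ in your deployment:

# List databases to confirm the HAPI FHIR data is back
docker exec -t <postgres_leader_container_id> psql -U postgres -l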