ClickHouse is a SQL datastore.
Listed in this page are all environment variables needed to run ClickHouse.
server=santempi-psql-1;port=5432; database=santedb; user id=santedb; password=SanteDB123; pooling=true; MinPoolSize=5; MaxPoolSize=15; Timeout=60;
server=santempi-psql-1;port=5432; database=auditdb; user id=santedb; password=SanteDB123; pooling=true; MinPoolSize=5; MaxPoolSize=15; Timeout=60;
Set JS_REPORT_DEV_MOUNT=true, otherwise the dev mount will persist the Jsreport scripts/templates across your branches.
Listed in this page are all environment variables needed to run Jsreport.
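The connection strings above are semicolon-delimited key=value pairs. As a minimal sketch (the field names are taken verbatim from the strings above), a single field such as the database name can be pulled out like this:

```shell
# Split the connection string on ';' and extract the database field.
CONN='server=santempi-psql-1;port=5432; database=santedb; user id=santedb; password=SanteDB123; pooling=true; MinPoolSize=5; MaxPoolSize=15; Timeout=60;'
DB=$(echo "$CONN" | tr ';' '\n' | grep 'database=' | cut -d= -f2)
echo "database: $DB"
```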
Kibana is a visualisation tool forming part of the Elastic (ELK) Stack for creating dashboards by querying data from ElasticSearch.
Listed in this page are all environment variables needed to run Kibana.
A Kafka processor to unbundle resources into their own Kafka topics.
A helper package for the Kafka message bus.
Listed in this page are all environment variables needed to run Hapi-proxy.

docker swarm init
wget -qO config.yaml https://github.com/jembi/platform/releases/latest/download/config.yaml
wget -qO .env.local https://github.com/jembi/platform/releases/latest/download/.env.local
instant package init --name interoperability-layer-openhim --name message-bus-kafka --env-file .env.local --dev
instant package destroy --name interoperability-layer-openhim --name message-bus-kafka --env-file .env.local --dev
wget https://github.com/jembi/platform/releases/latest/download/cdr-dw.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p cdr-dw --dev
Pre-defined recipes for common use cases.
The OpenHIM Platform includes a number of base packages which are useful for supporting Health Information Exchange workflows. Each section below describes the details of these packages.
wget https://github.com/jembi/platform/releases/latest/download/cdr-dw.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p cdr-dw --dev
wget https://github.com/jembi/platform/releases/latest/download/mpi.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p mpi --dev
A Kafka consumer that maps FHIR resources to a flattened data structure.
Listed in this page are all environment variables needed to run Monitoring package.
prometheus-job-service
kafka-minion:
  image: quay.io/cloudhut/kminion:master
  hostname: kafka-minion
  environment:
    KAFKA_BROKERS: kafka:9092
  deploy:
    labels:
      - prometheus-job-service=kafka
      - prometheus-address=kafka-minion:8080
Listed in this page are all environment variables needed to run Ofelia.
[job-run "mongo-backup"]
schedule = @daily
image = mongo:4.2
network = mongo_backup
volume = /backups:/tmp/backups
command = sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete = true
wget https://github.com/jembi/platform/releases/latest/download/cdr.env && \
wget https://github.com/jembi/platform/releases/latest/download/config.yaml && \
instant package init -p cdr --dev
Listed in this page are all environment variables needed to run the interoperability layer OpenHIM.
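A backup produced by the mongo-backup job above can be fed back through mongorestore, whose --gzip and --archive flags mirror the mongodump call. This is only a sketch: it needs a live MongoDB and the OPENHIM_MONGO_URL variable, so the command is echoed rather than executed.

```shell
# Pick the newest archive written by the job, then show the matching restore.
LATEST=$(ls -t /tmp/backups/mongodump_*.gz 2>/dev/null | head -n1)
RESTORE_CMD="mongorestore --uri=\${OPENHIM_MONGO_URL} --gzip --archive=${LATEST} --drop"
echo "$RESTORE_CMD"
```

The --drop flag replaces existing collections with the archived versions; omit it to merge instead.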
[job-run "renew-certs"]
schedule = @every 1440h ;60 days
image = jembi/swarm-nginx-renewal:v1.0.0
volume = renew-certbot-conf:/instant
volume = /var/run/docker.sock:/var/run/docker.sock:ro
environment = RENEWAL_EMAIL=${RENEWAL_EMAIL}
environment = STAGING=${STAGING}
environment = DOMAIN_NAME=${DOMAIN_NAME}
environment = SUBDOMAINS=${SUBDOMAINS}
environment = REVERSE_PROXY_STACK_NAME=${REVERSE_PROXY_STACK_NAME}
delete = true
Listed in this page are all environment variables needed to run Kafka mapper consumer.
A job scheduling tool.
Elasticsearch is the datastore for the Elastic (ELK) Stack.
Listed in this page are all environment variables needed to run and initialize Elasticsearch.
The interoperability layer that enables simpler data exchange between the different systems. It is also the security layer for the other systems.
Listed in this page are all environment variables needed to run Superset.
A patient matching and deduplication service for the platform.
Listed in this page are all environment variables needed to run Logstash.
Export Superset assets. You can save the response into a file called superset-export.zip under the folder specified above.
Listed in this page are all environment variables needed to run the Message Bus Kafka.
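The Superset asset export above can be scripted. As a sketch only: the host name, endpoint path, and bearer token below are assumptions, so the request is echoed rather than sent.

```shell
# Build the target filename and show the export request (not executed here).
OUT=superset-export.zip
echo "curl -H 'Authorization: Bearer \$TOKEN' -o ${OUT} https://superset.domain/api/v1/assets/export/"
```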
Listed in this page are all environment variables needed to run the hapi-fhir package.
curl http://127.0.0.1:3447/fhir/Patient
Listed in this page are all environment variables needed to run Reverse Proxy Nginx.
This page gives a list of common commands and examples for easy reference.
sudo curl -L https://github.com/openhie/instant-v2/releases/latest/download/instant-linux -o /usr/local/bin/instant
instant package init -n <package_name>
instant package down -n <package_name>
instant package up -n <package_name>
instant package destroy -n <package_name>
instant package init -p <profile_name>
instant package down -p <profile_name>
instant package up -p <profile_name>
instant package destroy -p <profile_name>
instant package init ... --dev
A Kafka processor to unbundle resources into their own Kafka topics.
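The per-package instant commands above can be scripted in a loop when several packages need the same action. A minimal sketch, where the package names are examples taken from this guide and the commands are only echoed:

```shell
# Apply the same instant action to a list of packages.
PKGS="interoperability-layer-openhim message-bus-kafka"
for pkg in $PKGS; do
  echo "instant package up -n ${pkg}"
done
```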
docker exec -u root -it superset_postgres-metastore-1.container-id-here bash
pg_dump superset -c -U admin > superset_backup.sql
docker cp superset_postgres-metastore-1.container-id-here:/superset_backup.sql /path/to/save/to/superset_backup.sql
docker cp /path/to/save/to/superset_backup.sql superset_postgres-metastore-1.container-id-here:/superset_backup.sql
docker exec -it superset_postgres-metastore-1.container-id-here bash
cat superset_backup.sql | psql -U admin -d superset
Various notes and guides.
Backup & restore process.
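Repeated dumps of the Superset metastore will overwrite each other unless the filename is stamped. A sketch, following the example container name above (the docker command is echoed, not run):

```shell
# Timestamp the pg_dump output so each backup gets a unique name.
STAMP=$(date +%Y%m%d_%H%M%S)
OUT="superset_backup_${STAMP}.sql"
echo "docker exec superset_postgres-metastore-1.container-id-here pg_dump -c -U admin superset > ${OUT}"
```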
"environmentVariables":
{
# Other Configurations
...
"SUPERSET_TRAEFIK_SUBDOMAIN": "superset"
}
"environmentVariables":
{
# Other Configurations
...
"REACT_APP_JEMPI_BASE_API_HOST": "jempi-api.domain",
"REACT_APP_JEMPI_BASE_API_PORT": "443",
"JEMPI_API_TRAEFIK_SUBDOMAIN": "jempi-api",
"JEMPI_WEB_TRAEFIK_HOST_NAME": "jempi-web",
}
"environmentVariables":
{
# Other Configurations
...
"SANTEDB_WWW_TRAEFIK_SUBDOMAIN": "santewww",
"SANTEDB_MPI_TRAEFIK_SUBDOMAIN": "santempi"
}
"environmentVariables":
{
# Other Configurations
...
"KIBANA_TRAEFIK_SUBDOMAIN": "kibana"
}
"environmentVariables":
{
# Other Configurations
...
"MINIO_BROWSER_REDIRECT_URL": "https://domain/minio-console/"
}
# API Configuration
- traefik.http.services.minio.loadbalancer.server.port=9090
# Console Configuration
- traefik.http.services.minio-console.loadbalancer.server.port=9001
"environmentVariables":
{
# Other Configurations
...
"KC_GRAFANA_ROOT_URL": "%(protocol)s://%(domain)s/grafana/",
"GF_SERVER_DOMAIN": "domain",
"GF_SERVER_SERVE_FROM_SUB_PATH": "true",
}
"environmentVariables":
{
# Other Configurations
...
"JS_REPORT_PATH_PREFIX": "/jsreport"
}
"environmentVariables":
{
# Other Configurations
...
"OPENHIM_SUBDOMAIN": "domain",
"OPENHIM_CONSOLE_BASE_URL": "http://domain",
"OPENHIM_CORE_MEDIATOR_HOSTNAME": "domain/openhimcomms",
"OPENHIM_MEDIATOR_API_PORT": "443"
}
ansible-galaxy collection install community.docker
ssh-keyscan -H <host> >> ~/.ssh/known_hosts
ansible-playbook \
--ask-vault-pass \
--become \
--inventory=inventories/<INVENTORY> \
--user=ubuntu \
playbooks/<PLAYBOOK>.yml
ansible-playbook \
--ask-vault-pass \
--become \
--inventory=inventories/development \
--user=ubuntu \
playbooks/provision.yml
echo -n '<YOUR SECRET>' | ansible-vault encrypt_string
username: root@openhim.org
password: instant101
init
fhir-datastore-hapi-fhir
docker ps -a
# Stop the server running in the container
docker exec -t <postgres_leader_container_id> pg_ctl stop -D /bitnami/postgresql/data
# Clear the contents of /bitnami/postgresql/data
docker exec -t --user root <postgres_leader_container_id> sh -c 'cd /bitnami/postgresql/data && rm -rf $(ls)'
# Copy over the base.tar file
sudo docker cp <backup_file>/base.tar <postgres_leader_container_id>:/bitnami/postgresql
# Extract the base.tar file
docker exec -t --user root <postgres_leader_container_id> sh -c 'tar -xf /bitnami/postgresql/base.tar --directory=/bitnami/postgresql/data'
# Copy over the pg_wal.tar file
sudo docker cp <backup_file>/pg_wal.tar <postgres_leader_container_id>:/bitnami/postgresql
# Extract pg_wal.tar
docker exec -t --user root <postgres_leader_container_id> sh -c 'tar -xf /bitnami/postgresql/pg_wal.tar --directory=/bitnami/postgresql/data/pg_wal'
# Copy conf dir over
docker exec -t --user root <postgres_leader_container_id> sh -c 'cp -r /bitnami/postgresql/conf/. /bitnami/postgresql/data'
# Set pg_wal.tar permissions
docker exec -t --user root <postgres_leader_container_id> sh -c 'cd /bitnami/postgresql/data/pg_wal && chown -v 1001 $(ls)'
# Start the server
docker exec -t <postgres_leader_container_id> pg_ctl start -D /bitnami/postgresql/data
DOCKER_HOST=ssh://ubuntu@<ip> instant package init ...
Reverse proxy for secure and insecure nginx configurations.
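The DOCKER_HOST line above points the local instant CLI at a remote Docker daemon over SSH. A minimal sketch, where the user and IP are placeholders (the IP is from the TEST-NET documentation range):

```shell
# Export DOCKER_HOST so every subsequent docker/instant call targets the remote host.
export DOCKER_HOST="ssh://ubuntu@192.0.2.10"
echo "DOCKER_HOST=${DOCKER_HOST} instant package init ..."
```

Exporting the variable once avoids prefixing each command; unset it to return to the local daemon.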
[job-run "mongo-backup"]
schedule = @every 24h
image = mongo:4.2
network = mongo_backup
volume = /backups:/tmp/backups
command = sh -c 'mongodump --uri=${OPENHIM_MONGO_URL} --gzip --archive=/tmp/backups/mongodump_$(date +%s).gz'
delete = true
terraform init
terraform apply
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Outputs:
SUBNET_ID = "subnet-0004b0dacb5862d59"
VPC_ID = "vpc-067ab69f374ac9f47"
terraform init
PUBLIC_KEY_PATH - path to the user's public key file that gets injected into the servers created
PROJECT_NAME - unique project name that is used to identify each VPC and its resources
HOSTED_ZONE_ID - (only if you are creating domains, which by default you are) the hosted zone to use, this must be created in the AWS console
DOMAIN_NAME - the base domain name to use
SUBNET_ID - the subnet id to use, copy this from the previous step
VPC_ID - the VPC id to use, copy this from the previous step
PUBLIC_KEY_PATH = "/home/{user}/.ssh/id_rsa.pub"
PROJECT_NAME = "jembi_platform_dev_{user}"
HOSTED_ZONE_ID = "Z00782582NSP6D0VHBCMI"
DOMAIN_NAME = "{user}.jembi.cloud"
SUBNET_ID = "subnet-0004b0dacb5862d59"
VPC_ID = "vpc-067ab69f374ac9f47"
cat ~/.aws/credentials
[default]
aws_access_key_id = AKIA6FOPGN5TYHXXXXX
aws_secret_access_key = Qf7E+qcXXXXXXQh4XznN4MM8qR/VP/SXgXXXXX
[jembi-sandbox]
aws_access_key_id = AKIASOHFAV527JCXXXXX
aws_secret_access_key = YXFu3XxXXXXXTeNXdUtIg0gb9Ro7gJ89XXXXX
[jembi-icap]
aws_access_key_id = AKIAVFN7GJJFS6LXXXXX
aws_secret_access_key = b2I6jhwXXXXX4YehBCx/7rKl1JZjYdbtXXXXX
terraform apply -var-file my.tfvars
Apply complete! Resources: 13 added, 0 changed, 0 destroyed.
Outputs:
domains = {
"domain_name" = "{user}.jembi.cloud"
"node_domain_names" = [
"node-0.{user}.jembi.cloud",
"node-1.{user}.jembi.cloud",
"node-2.{user}.jembi.cloud",
]
"subdomain" = [
"*.{user}.jembi.cloud",
]
}
public_ips = [
"13.245.143.121",
"13.246.39.101",
"13.246.39.92",
]
terraform destroy -var-file my.tfvars
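The named profiles in ~/.aws/credentials above can be selected per run with the standard AWS_PROFILE variable, which the Terraform AWS provider reads. A sketch (the terraform command is only echoed):

```shell
# Select one of the credential profiles shown above for this session.
export AWS_PROFILE=jembi-sandbox
echo "terraform apply -var-file my.tfvars  # runs with profile ${AWS_PROFILE}"
```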