
Environment Variables

This page lists all environment variables needed to run Logstash.

| Variable Name | Type | Relevance | Required | Default |
| --- | --- | --- | --- | --- |
| LOGSTASH_INSTANCES | Number | Number of service replicas | No | 1 |
| LOGSTASH_DEV_MOUNT | Boolean | DEV mount mode enabling flag | No | false |
| LOGSTASH_PACKAGE_PATH | String | Logstash package absolute path | Yes, if LOGSTASH_DEV_MOUNT is true | |
| LS_JAVA_OPTS | String | JVM heap size; should be between 4GB and 8GB (at most 50-75% of total RAM) | No | -Xmx2g -Xms2g |
| ES_ELASTIC | String | Elasticsearch Logstash user password | Yes | dev_password_only |
| ES_HOSTS | String | Elasticsearch connection string | Yes | analytics-datastore-elastic-search:9200 |
| KIBANA_SSL | Boolean | SSL protocol requirement | No | true |
| LOGSTASH_MEMORY_LIMIT | String | RAM usage limit | No | 3G |
| LOGSTASH_MEMORY_RESERVE | String | Reserved RAM | No | 500M |

Local Development

A generic Logstash pipeline for the ELK stack.

Adding pipelines and configs

To add Logstash config files, place them in <path to project packages>/data-mapper-logstash/pipeline.
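As an illustrative sketch, a minimal pipeline file dropped into that directory could look like the following (the filename and the stdin/stdout plugins are examples for local experimentation, not the platform's actual pipeline):

```conf
# pipeline/example.conf (hypothetical filename)
input {
  stdin {}
}

output {
  # Print each event in a readable format for quick inspection
  stdout { codec => rubydebug }
}
```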

Developing the Logstash configs locally

To iterate on the Logstash configs without repeatedly stopping and starting the service, set the LOGSTASH_DEV_MOUNT env var to true in your .env file. This mounts the service's config files from your local machine.

Cluster

When attaching Logstash to an Elasticsearch cluster, ensure you use the ES_HOSTS environment variable, e.g. ES_HOSTS="analytics-datastore-elastic-search-1:9200","analytics-datastore-elastic-search-2:9200","analytics-datastore-elastic-search-3:9200", and reference it in your Logstash configs, e.g. hosts => [$ES_HOSTS].
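A sketch of how that reference might appear in an output block (the index name is a hypothetical example; ES_ELASTIC is the password variable from the table above):

```conf
output {
  elasticsearch {
    # Expands to the comma-separated host list from the environment
    hosts => [$ES_HOSTS]
    password => "${ES_ELASTIC}"
    index => "fhir-flattened"   # hypothetical index name
  }
}
```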

Notes

  • With LOGSTASH_DEV_MOUNT=true, you have to set the LOGSTASH_PACKAGE_PATH variable to the absolute path of the package containing your Logstash config files, e.g. LOGSTASH_PACKAGE_PATH=/home/user/Documents/Projects/platform/data-mapper-logstash.

  • WARNING: do not edit the pipeline files from within the Logstash container; doing so changes the group ID and user ID, which results in file permission errors on your local file system.
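Putting the two dev-mount settings together, a .env sketch could look like this (the path is the example from the note above; adjust it to your own checkout):

```
# .env
LOGSTASH_DEV_MOUNT=true
LOGSTASH_PACKAGE_PATH=/home/user/Documents/Projects/platform/data-mapper-logstash
```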

Data Mapper Logstash


Logstash provides a data transformation pipeline for analytics data. In the platform it is responsible for transforming FHIR messages into a flattened object that can be inserted into Elasticsearch.

Input

Logstash supports several input types for reading data: Kafka, HTTP ports, files, etc.
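For instance, a Kafka input could be sketched as follows (the broker address and topic name are hypothetical examples, not the platform's actual values):

```conf
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # hypothetical broker address
    topics => ["fhir-messages"]         # hypothetical topic name
    codec => "json"
  }
}
```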

Filters

With a set of filters and plugins, the data can be transformed, filtered, and conditioned.

This allows a structured, flattened object to be created from many long, deeply nested resources, making individual fields much easier to access and discarding unused data.
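As a sketch of this flattening step, a filter might copy a nested value into a top-level field and drop fields that are not needed (the field names below are hypothetical examples, not the platform's actual FHIR mapping):

```conf
filter {
  mutate {
    # Promote a nested value to a flat, top-level field
    add_field => { "patient_id" => "%{[subject][reference]}" }
    # Discard fields that are not used downstream
    remove_field => ["meta", "text"]
  }
}
```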

Output

To save the data, Logstash provides a set of outputs such as: Elasticsearch, S3, files, etc.
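For example, during local development the same pipeline could write events to a file for inspection instead (the path is a hypothetical example):

```conf
output {
  file {
    path => "/tmp/logstash-debug.log"   # hypothetical debug output path
  }
}
```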