
Deploy Mirth Connect on MedStack Control

Overview

NextGen Connect, commonly known as Mirth Connect, is an open-source message integration engine focused on healthcare. It routes and transforms HL7 messages between the electronic health/medical record (EHR/EMR) systems managed by healthcare enterprises. While there are many API-based interoperability solutions like Particle Health (which we demonstrated on the Docker Build LIVE show), Mirth Connect remains widely used by healthcare enterprises and institutions.

There are two key concepts to deploying Mirth Connect effectively:

  1. Deploying Mirth Connect and PostgreSQL
  2. Proxying client requests through a webserver

In this tutorial, you'll learn how to apply these concepts when deploying Mirth Connect on MedStack Control. Mirth Connect requires two services: the Mirth Connect application itself and a database. Here, we'll use PostgreSQL as the database service.

1 – Deploying Mirth Connect and PostgreSQL

Let's start by reviewing the documentation for deploying the container image hosted on Docker Hub at nextgenhealthcare/connect (3 min read).

Go to Docker Hub →

To deploy this on MedStack Control, you'll need to create two services. We recommend using the MedStack Control API to create them. If you're new to the MedStack Control API, follow our API quickstart guide on Authentication (1 min read) to generate a personal access token for authenticating your requests.

Go to MedStack Control API Reference →
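Every call below is a plain HTTPS request to the MedStack Control API. As a minimal sketch, assuming the personal access token is sent as a bearer token in the Authorization header (confirm the exact scheme in the Authentication guide), a request takes this shape:

POST /v1/companies/:company_id/clusters/:cluster_id/volumes HTTP/1.1
Host: api.medstack.co
Authorization: Bearer <personal-access-token>
Content-Type: application/json

The :company_id and :cluster_id path parameters are the IDs of your company and cluster in MedStack Control.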

Inferring from the docker-compose file in the NextGen Healthcare guide for Connect, here are the high-level steps to deploy Mirth Connect on MedStack Control.

Deploy postgres

To deploy the postgres service, you'll need to perform the following steps:

  1. Create a label (and potentially a worker node) for pinning the database to a node
  2. Create a volume for making database data persistent
  3. Create the postgres service

Create a label

You'll need to create a label on a node so that the postgres service schedules its container onto that specific node. This is achieved by pairing a node label with a service placement constraint.

We pin the postgres container to a specific node to ensure the postgres data written to that node's encrypted disk is always accessible to the postgres container. Docker volumes are local to the node they are created on, and Docker does not currently support persistent volume claims across the network.

Create a node → We recommend creating a worker node for this if one does not yet exist.

Add a label → Set the label to "data = mirthdb". This key/value pair is what the placement constraint node.labels.data==mirthdb in the service definition below matches against.

Create a volume

Creating the volume provisions persistent local storage on the node that outlives the lifecycle of any container. You will create a volume for storing the postgres data; its name is what you'll reference as the mount source in the service definition below.

API Documentation

POST https://api.medstack.co/v1/companies/:company_id/clusters/:cluster_id/volumes

body:

{
  "name" : "postgresql-data"
}

Create a service

Creating the service pulls the postgres container image and runs it as a container on MedStack Control.

API Documentation

POST https://api.medstack.co/v1/companies/:company_id/clusters/:cluster_id/services

body:

{
  "name" : "postgres",
  "image" : "postgres",
  "replicas" : 1,
  "mounts" : [
    {
      "target" : "/var/lib/postgresql/data",
      "source" : "postgresql-data",
      "readonly" : false
    }
  ],
  "environment_variables" : [
    {
      "key" : "POSTGRES_USER",
      "value" : "mirthdb"
    },
    {
      "key" : "POSTGRES_PASSWORD",
      "value" : "mirthdb"
    },
    {
      "key" : "POSTGRES_DB",
      "value" : "mirthdb"
    }
  ],
  "placement_constraints" : [
    "node.labels.data==mirthdb"
  ]
}
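
Once the request succeeds, you may want to confirm that the service is running on the labelled node. A minimal sketch, assuming the services endpoint follows REST conventions and also accepts GET requests (verify this against the API reference):

GET https://api.medstack.co/v1/companies/:company_id/clusters/:cluster_id/services
Authorization: Bearer <personal-access-token>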

Deploy connect

To deploy the Mirth Connect service, you'll need to create a connect service that depends on the postgres database. Note that the host in DATABASE_URL, postgres, is the name of the service created above; Docker's internal DNS resolves service names on the cluster's overlay network.

API Documentation

POST https://api.medstack.co/v1/companies/:company_id/clusters/:cluster_id/services

body:

{
  "name" : "mirthconnect",
  "image" : "nextgenhealthcare/connect",
  "replicas" : 1,
  "environment_variables":[
    {
      "key" : "DATABASE",
      "value" : "postgres"
    },
    {
      "key" : "DATABASE_URL",
      "value" : "jdbc:postgresql://postgres:5432/mirthdb"
    },
    {
      "key" : "DATABASE_MAX_CONNECTIONS",
      "value" : "20"
    },
    {
      "key" : "DATABASE_USERNAME",
      "value" : "mirthdb"
    },
    {
      "key" : "DATABASE_PASSWORD",
      "value" : "mirthdb"
    },
    {
      "key" : "DATABASE_MAX_RETRY",
      "value" : "2"
    },
    {
      "key" : "DATABASE_RETRY_WAIT",
      "value" : "10000"
    },
    {
      "key" : "KEYSTORE_STOREPASS",
      "value" : "docker_storepass"
    },
    {
      "key" : "KEYSTORE_KEYPASS",
      "value" : "docker_keypass"
    },
    {
      "key" : "VMOPTIONS",
      "value" : "-Xmx512m"
    }
  ]
}

Recommendations

In this tutorial, database, user, and keystore credentials are stored in environment variables. A more secure way to handle sensitive data like this is Docker secrets.

  • In the NextGen Connect guide on Docker Hub, please reference the section on Using Docker Secrets.
  • If you need to persist data located in /appdata beyond the lifecycle of the container, you may also wish to consider Using Volumes. However, please keep in mind that the MedStack Control backup system captures hourly snapshots of all Docker volumes, so this data will be backed up routinely.
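
As one illustration of the Docker secrets pattern, the official postgres image supports _FILE variants of its environment variables, which read the value from a file instead of the environment. A minimal sketch, assuming your cluster exposes Docker secrets to containers as files under /run/secrets and using a hypothetical secret named pg_password (check how MedStack Control surfaces Docker secrets):

{
  "key" : "POSTGRES_PASSWORD_FILE",
  "value" : "/run/secrets/pg_password"
}

With this environment variable in place, POSTGRES_PASSWORD and its plaintext value can be dropped from the service definition.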

2 – Proxying client requests through a webserver

Mirth Connect requires that client requests arrive encrypted using Mirth Connect's JKS certificate. However, the MedStack Control load balancer manages SSL certificates for apps and decrypts data upon ingress before passing it along to the appropriate Docker service.

While traffic between nodes on a Docker overlay network can be encrypted, the containers on it may still communicate with each other unencrypted. This poses a challenge for Mirth Connect, but it can be solved by introducing a webserver in the cluster as a proxy.

The Mirth Connect user guide covers changing the server certificate on page 411. Mirth Connect needs to manage its own JKS certificate, so somewhere in the chain of communication the ingress payload must be re-encrypted before being passed to Mirth Connect; this is where the webserver comes into play.

Overview of steps

  1. Deploy a webserver on MedStack Control, such as nginx.
  2. Add the domain mapping details this service needs to interface with the client, including a domain name and the internal port at which the load balancer passes traffic to the webserver.
  3. Configure the webserver to re-encrypt the payload from the load balancer (which originated encrypted from the client) using Mirth Connect's custom JKS certificate, as sketched below.
  4. Forward the re-encrypted request to the mirthconnect service created in section 1 above.
  5. Egress data from mirthconnect directly to the client. Per the networking rules of MedStack Control clusters, data can egress over any protocol.
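
Here is a minimal nginx sketch of steps 2 through 4. It assumes the load balancer forwards decrypted traffic to nginx on port 8080, that the mirthconnect service accepts TLS connections on its default port 8443, and that mirth.example.com is a hypothetical domain; adjust the ports to match your Mirth Connect channel configuration:

server {
    # Internal port the MedStack Control load balancer forwards traffic to
    listen 8080;
    server_name mirth.example.com;

    location / {
        # Re-encrypt the payload: nginx opens a TLS connection to Mirth
        # Connect, which presents its own JKS-managed certificate
        proxy_pass https://mirthconnect:8443;

        # Optionally pin and verify Mirth Connect's certificate
        # proxy_ssl_verify on;
        # proxy_ssl_trusted_certificate /etc/nginx/mirth-ca.pem;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Because Docker's internal DNS resolves service names on the overlay network, proxy_pass can target the mirthconnect service by name.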

Conceptual design

The conceptual design of how traffic flows from the client through to Mirth Connect is as follows.

Client (encrypted payload ingress) > Load balancer (decrypts payload) > Nginx (re-encrypts payload) > Mirth Connect

Mirth Connect can then respond directly to the client.

Mirth Connect (encrypted payload egress) > Client