Now that you understand the basics of using Mirth and how to build your own channels/interfaces, let's jump into the resources Datica provides to help you get started right out of the box.

Open Source Resources


The first thing that you'll want to familiarize yourself with is our open source repository located here:

Here you'll find Mirth template channels, JavaScript transforms, example messages and schemas, and more, meant to act as a foundation for getting your integrations built quickly and without much hassle. To start, our focus is primarily on HL7 message translation, so you'll notice that most of our resources translate between HL7-formatted data and the Datica standard JSON format. However, given the open source nature of the effort, we will continue expanding the offered resources and adding additional transforms, new versions, channel templates, and popular requests from the community. An overview of HL7 itself is outside the scope of this guide, but feel free to peruse our Academy guides and articles for more information on HL7 and Datica's position within healthcare IT in general:

The repo's README explains this at a high level, but we'll go over the structure of the repo and how to navigate it in more detail so that you're able to find what you're looking for:

  • Specifications = One of the main ideas behind using Mirth is the ability to translate hard-to-parse data (like bar-delimited and somewhat unruly HL7) into something more digestible (like JSON or XML). However, before you're able to consume the translated data, you'll need to know what the format will be. The specifications section of the repo contains all of our standard JSON schemas for each HL7 message type, per HL7 version (currently only v2.3, but we will be adding more in the future). Say that you want to be able to consume ADT messages in JSON: check out specifications/ADT.json, which contains the standard ADT JSON as translated from HL7 ADT v2.3.
    • One note about these models is that they are exhaustive and cover all possible field mappings available within the standard HL7 specification for the specified version. We'll mention it below in the Channel Templates section, but our template channels will translate the messages to include only the JSON fields that have a value in the received HL7 messages. So in a case where you want to consume "all" of the information sent in an HL7 message, you would need your system to be capable of reading in all of the elements in the relevant JSON schema. However, if there is only a limited subset of fields that you know you care about, you can pull out just those. Note that in either case, you should check whether a key exists before attempting to pull its value, as it won't appear in the JSON if there was no matching element sent in the HL7 message.
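As a sketch of that key-existence check (the field names here are illustrative, not the exact Datica schema):

```javascript
// Hypothetical translated-ADT JSON: keys only appear when the source
// HL7 message actually contained a value for that field.
const adt = {
  patient: {
    lastName: "HANEY",
    firstName: "LOGAN"
    // no "middleName" key, because the HL7 field was empty
  }
};

// Check that the key exists before reading it, with a fallback default.
function getField(obj, key, fallback) {
  return obj && Object.prototype.hasOwnProperty.call(obj, key)
    ? obj[key]
    : fallback;
}

const middle = getField(adt.patient, "middleName", null);
```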
  • Message Examples = As you build out your integration, it will be useful to have realistic example messages (both HL7 and JSON) on hand, both to test with and to provide to your integration partner as an example of what you expect to send or receive depending on the workflow. Here we provide a couple of examples for each message type, giving you both the HL7 message and the JSON translation - which can be useful even as just a reference if you want to better understand how the transforms work. Take a look at this example ADT message, then go check out the associated JSON translation in the repo:

      PID|1|16744866|64536120|82268156|HANEY^LOGAN|CANTRELL|19310725|M|BOOMER|W|3262 CEDAR AVE^MADISON^CO^98082|98082|8374320530|8398359418|AR|U|AOG|743914|977747853|S250438779547852||U|MADISON^NE|Y|5|US||US^UNITED STATES OF AMERICA||N||
      NK1|1|FLOWERS^LOGAN|CHD|2212 JUNIPER DR^MADISON^CO^93292|8786253069|8506218599|N|20130930||FARMER||||W|M|19590714|U|A5|US|ILO|A|F|N|F|ORT|PATTERSON|US|H|||||50361775|P|B||928535958|
      PV1|1|E|XXPOC^314^0^DATICA HOSPITAL A|L|864865|XXPOC^989^0^DATICA HOSPITAL B|65841330^HOWARD^PATRICK^C^^PA|31299465^ABBOTT^ANNA^W^^NP|89564502^MCLAUGHLIN^BOBBY^Z^^MD|SUR|XXPOC^422^1^DATICA HOSPITAL A|N||0|A8||77529878^MUELLER^VINCENT^S^^PA|U|290107|A23|R|N|G|44|20150307|949.57|2|C|B|||||||||||||||
      OBX|1|NM|730898^MCV^L^787-2^ERYTHROCYTE MEAN CORPUSCULAR VOLUME^LN||87|FL|79-97|||N|F|20160518||20160520|CB^PATHOLOGY LAB X|||
      AL1|1|DA|F912702953^RIVAROXABAN^^FROM XARELTO|MI|HIVES|20151023|
      DG1|1||S82.9^UNSPECIFIED FRACTURE OF LOWER LEG^ICD-10||20161003|F|
      PR1|1||99213^OFFICE VISIT^CPT||20160326||D||
      GT1|1|50361775|FLOWERS^LOGAN^M||2212 JUNIPER DR^MADISON^CO^93292|8786253069|8506218599|19590714|M|||928535958|20150315||1||||467073|P||N|||N||||7379722661|W||||||ILO|A||N|F|ORT|PATTERSON|USA|H|||||FARMER||||P||B|
      IN1|1|K38312663|36779863|DATICA INSURANCE CO|5259 JUNIPER TER^MADISON^PA^98846||8835198967|925570|INSURANCE GROUP Q|287982037|ACME GROUP|20161227||A15401823424|HDHP|HANEY^LOGAN^R|SELF|19310725|3262 CEDAR AVE^MADISON^CO^98082|Y|IN|1|N||N||N||20160518||S|||||K735285220|500|||||T|M|3322 ASH DR^MADISON^AL^99278||V56750706|H||16744866|
      IN2|862025|977747853|ACME GROUP||I||||||||||||||N|N|||||36779863|K735285220|1|||1^500^11|U|A8|US|AR|A|O|Y|F|AOG|CANTRELL|USA|U|U|20150515||PAINTER||T|||||||||||||K735285220|SEL|8374320530|8398359418||N|N|Y|||W|SEL|
      IN3|1|913251489812|UNDEFINED|N|AT|20170110||58454145^MORTON^HANNAH^Q^^RN|20170110||PE^7|||95070069^MOORE^HEATHER^C^^NP|58454145^MORTON^HANNAH^Q^^RN|8145588362||DATICA CERTIFICATION GROUP|8294323070|IPE|45882470^BELTRAN^GEORGE^Q^^PA||||96068283^LYNN^JERRY^N^^RN|
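To get a feel for why bar-delimited HL7 takes real work to parse, here is a minimal sketch (not the Datica transform itself) that splits one PID segment from the example above into fields and components. A production parser must also handle repetition (`~`), escape (`\`), and subcomponent (`&`) delimiters, which this sketch ignores:

```javascript
// A trimmed PID segment from the example message above.
const segment = "PID|1|16744866|64536120|82268156|HANEY^LOGAN|CANTRELL|19310725|M";

// HL7 v2 fields are bar-delimited; components within a field use '^'.
const fields = segment.split("|");
const patientName = fields[5].split("^"); // ["HANEY", "LOGAN"]

const parsed = {
  segment: fields[0],       // "PID"
  lastName: patientName[0],
  firstName: patientName[1],
  dateOfBirth: fields[7],   // "19310725", HL7 yyyyMMdd
  sex: fields[8]
};
```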
  • Channel Templates = The heart of the OCI resources lies in our Channel Templates, which can be downloaded and imported right into your Mirth instance to get you underway on your integration in minutes. These are fully-functional templates that perform a variety of functions, such as translating an HL7 message to JSON or giving you the framework for building an HTTP Sender. All of the templates are XML files exported from version 3.4.1 of Mirth (so be aware that they cannot be imported into earlier versions of Mirth, but can be imported into newer versions without issue). They are broken down into the following sub-directories:
    • HL7 to JSON = Organized by HL7 version, these are channels that contain a Source Transform that will translate an HL7 message type to the corresponding Datica JSON model (see the specifications folder, and/or the description above). Note that when importing them, you will want to select "Yes" when prompted if you want to import the attached code templates as well. This will import some useful (and one of them critical, see below) code templates into your instance alongside the channel itself. These can be managed in the "Edit Code Templates" option in the Channel Tasks pane on the left. All of our HL7 transform channels include the same code templates so once you import them for one channel, you shouldn't have to do so again.
        • If you already added them from a previous import, you'll see a notification saying that the names conflict with existing templates. Just click Cancel and you can import the channel without importing the code templates.
        • One final note on the code templates - if you are seeing messages fail to process due to Mirth being unable to find a reference to the "getSegmentsAfter" function, make sure that you have the code templates imported appropriately and that the channel dependencies (set in the Source tab) have the "Datica Mirth OCI Code Template Library" library selected. If it's not, select it then deploy the channel and give it another whirl. If you are still running into issues, please contact Datica support. 
      • These channels are all primarily Channel Reader sources going to a single Channel Writer destination (that doesn't actually send anywhere). You will need to configure the Source and Destination pieces of the channel if you want to use it in a customer-facing workflow. For guidance on that, check out Mirth Channel Overview or the Protocol examples under the protocol_examples folder.
    • Protocol examples = These channels don't have as much substance Transformer-wise, but they include some example builds using different listening/sending protocols - for example, a TCP Listener that has an HTTP Sender destination, or a File Reader that processes through to a Document Writer. These aren't as "out-of-the-box" when it comes to the translation of data, but they provide the framework for injecting your own custom data model, or can simply be used as a reference for configuring listeners/senders unique to your product workflow.
    • Utilities = This directory holds various utility channels, either built by Datica or provided by the community, that perform various useful tasks. For example, one utility available in the repo right now is a Mirth channel that can generate valid HL7 message examples with randomized data. See each utility in the directory for a README with information on how to use it.
  • Transforms = Here is where we store all of the individual segment-level transforms that make up the full spectrum of HL7-based Datica data translation. Since different HL7 message types utilize a specific subset of segments, but some segments are used in more than one (for example, the "PID" [patient demographic info] segment can appear in almost every message type), we chose to create a separate JS-based transform for each individual segment. Then, when putting together our template channels, we pieced the applicable segment transforms together as individual Transform Steps in order to build the full HL7 model. You'll notice that some transforms can contain other segments embedded inside, and this is common with HL7 (which is why the bar-delimited format is hard to parse in and of itself!). The naming convention on the different transforms should tell you what the intended usage is (e.g. "PID for ADT"), as different message types can have unique combinations of segments embedded within each other.
    • Something else you'll notice in the channel templates themselves is that some of the steps refer to each other. This is due to how HL7 messages can contain multiple data sets within them, with repeating fields that are only relevant to one data set at a time. For example, an ORU (results) message can contain multiple results for different orders, so each of the result data sets must be nested under the relevant order (ORC) segment.
      • It can get complicated fast, but that's why we've pre-built these templates for you to use, so all you have to do is build the functionality on your end to absorb the output and we handle the data translation itself!
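As a rough illustration of that grouping logic (the field names here are made up for the sketch; the real templates do this with getSegmentsAfter against Mirth's parsed message tree):

```javascript
// A flattened ORU-style sequence of segments: each OBX result belongs
// to the most recent ORC order segment that preceded it.
const segments = [
  { type: "ORC", orderId: "A1" },
  { type: "OBX", value: "87" },
  { type: "OBX", value: "4.2" },
  { type: "ORC", orderId: "B2" },
  { type: "OBX", value: "140" }
];

// Nest each result under the order it follows.
function groupResults(segs) {
  const orders = [];
  for (const seg of segs) {
    if (seg.type === "ORC") {
      orders.push({ orderId: seg.orderId, results: [] });
    } else if (seg.type === "OBX" && orders.length > 0) {
      orders[orders.length - 1].results.push(seg.value);
    }
  }
  return orders;
}

const grouped = groupResults(segments);
```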
  • Code Templates = As mentioned above, some of the Channel Templates have code templates included in their exports so that they can be pulled in when the channel is imported into a separate Mirth instance. This directory is where we store useful code templates that can be downloaded and imported individually, separate from the ones that come with the channels themselves. We'll continue to add to this directory as we build more useful code templates, but for now it contains just a few, which we'll cover in the next sub-section.

Included Code Templates

Here are the descriptions for the Code Templates currently included in the Datica OCI repository. These can be downloaded and imported into your Mirth instance by going to the Channels view, then the "Edit Code Templates" page. From there you can either create a new library (libraries can be useful for splitting templates out, as you can assign them to be used only by certain channels) and import a code template into it, or you can import it into an existing library. Just remember that if any channels are set to use code templates in a library that you modify or add new templates to, then you need to re-deploy those channels before they're able to utilize your changes. Mirth will give you a hint as to whether a channel has un-deployed code template changes pending by highlighting the value in the "Rev Δ" (revision delta) column on the Channels view.

The templates currently included in the repo are as follows:

  • getSegmentsAfter = This code template, initially created by one of the Mirth Connect developers themselves, is an extremely useful function that allows you to grab segments coming after a specified segment. This is used extensively in our HL7 to JSON transforms, as it allows us to pull in embedded segments - such as when an NTE (note) segment comes after an OBX (observation) segment, meaning that the note is relevant to that observation. Since our transforms rely heavily on this functionality, this code template is required if you want to use our channel templates. When you import a channel template, it should prompt you to install the code templates along with it, which makes it easy to get in place. It works by calling

    getSegmentsAfter(root, startSeg, segName, consecutiveInd, stopSegNames)

     where the fields are as follows:


    • root (required): The root HL7Message node of the message, or the parent of the segment node.
    • startSeg (required): The segment AFTER which to start collecting segments.
    • segName (required): The name (String or RegExp) of the segments you want to collect.
    • consecutiveInd (optional): If true, indicates that the segments are expected to come directly after startSeg.  If false, segments are collected until another segment with the same name as startSeg is encountered. Defaults to false.
    • stopSegNames (optional): An array of segment names that, when encountered, stop the collection of segments.

    An example usage might look like "getSegmentsAfter(msg,msg['PID'][0],'PD1',true);" for a simple call, or something like "getSegmentsAfter(msg,msg['RXE'][0],'RXR',false, ['RXE', 'RXA']);" for a more complicated one.
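To make the collection semantics concrete, here is a simplified re-implementation over a plain array of segment names. The real code template operates on Mirth's E4X message tree and returns segment nodes, so treat this purely as an illustration of the behavior:

```javascript
// Simplified model: the message is an ordered list of segment names,
// and startIndex points at the segment AFTER which collection starts.
function collectSegmentsAfter(segNames, startIndex, target, consecutive, stopNames) {
  const startName = segNames[startIndex];
  const collected = [];
  for (let i = startIndex + 1; i < segNames.length; i++) {
    const name = segNames[i];
    // Stop at another segment with the same name as the start segment,
    // or at any explicitly listed stop segment.
    if (name === startName || (stopNames && stopNames.includes(name))) break;
    if (name === target) {
      collected.push(i);
    } else if (consecutive) {
      // In consecutive mode, any non-matching segment ends the run.
      break;
    }
  }
  return collected;
}

// The OBX at index 2 has two NTE notes directly after it; the NTE at
// index 6 belongs to the second OBX, so it is not collected.
const msg = ["MSH", "PID", "OBX", "NTE", "NTE", "OBX", "NTE"];
const notes = collectSegmentsAfter(msg, 2, "NTE", true);
```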

  • toISODate = This code template will take an HL7-formatted date (typically formatted like yyyyMMddHHmmss) and turn it into ISO format (ex. 'yyyy-MM-ddTHH:mm:ssZ') for easier parsing on your end. There are only two parameters: the HL7 date string (which is required) and an optional timezone string corresponding to a java.util.TimeZone value (ex: "US/Central"). The latter is useful if you want to maintain a timezone offset that may be assumed within any time/date fields in the HL7 (offsets aren't usually sent, so datetimes typically assume the sending organization's local time). If a timezone is not specified in the function call, it will assume UTC and tack a 'Z' onto the end of the ISO date. An example call looks like: toISODate(msg['MSH']['MSH.7']['MSH.7.1'].toString(), "US/Eastern"). Be aware that if an empty string is passed to the function, it may default to returning the standard epoch date of '1970-01-01T00:00:00Z', so you may want to add a null-check of sorts before calling it. For more on dates in Mirth in general, check out this Confluence page written by the Mirth Connect team themselves:
  • toHL7Date = Basically the opposite of the above, this function turns ISO-formatted date strings (ex: "yyyy-MM-ddTHH:mm:ssZ") into HL7-formatted date strings (ex. "yyyyMMddHHmmss"). An example call looks like: toHL7Date(msh.dateTime, "US/Pacific"). For more on dates in Mirth in general, check out this Confluence page written by the Mirth Connect team themselves:
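As a rough sketch of what these two conversions do, here is a simplified plain-JavaScript version that assumes UTC and a full 14-digit HL7 input. The actual code templates handle time zones via java.util.TimeZone and are more forgiving about partial dates, so this is an illustration of the format change only:

```javascript
// HL7 yyyyMMddHHmmss -> ISO yyyy-MM-ddTHH:mm:ssZ (UTC assumed).
function hl7ToIso(hl7) {
  if (!hl7 || hl7.length < 14) return null; // caller should null-check first
  return hl7.slice(0, 4) + "-" + hl7.slice(4, 6) + "-" + hl7.slice(6, 8) +
         "T" + hl7.slice(8, 10) + ":" + hl7.slice(10, 12) + ":" +
         hl7.slice(12, 14) + "Z";
}

// ISO yyyy-MM-ddTHH:mm:ssZ -> HL7 yyyyMMddHHmmss.
function isoToHl7(iso) {
  if (!iso) return null;
  // Strip the ISO punctuation, keeping only the digits.
  return iso.replace(/[-:TZ]/g, "").slice(0, 14);
}
```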