Compliant Kubernetes Service documentation has moved

Please note: you are not reading Kubernetes documentation. If you are looking for Compliant Kubernetes Service documentation, it has moved.

Container Services End of Life

Datica has made the decision to discontinue support for the container services feature set on CPaaS. While customers have had success using container services to deploy applications on CPaaS, we have found that the workflows for creating, maintaining, and supporting container services are cumbersome for users and Datica alike. With CKS proven as a viable alternative that provides the flexibility to deploy custom Docker images, it no longer makes sense for us to continue supporting container services on CPaaS.

On to the specifics:

  • Effective immediately, Datica will no longer allow the creation of new container services. No new features or improvements will be added to container services. Issues that are deemed critical to existing container services workflows will be fixed, and all systems that container services rely on will continue to receive patches and updates until the EOL date (January 14, 2020).
  • The end of Datica’s support for container services on CPaaS will be January 14, 2020. All applications running in container services must be converted to code services prior to this date. It is critical to note that any container services still running on that date will cease to function.

We strongly encourage our existing CPaaS customers utilizing our legacy container service feature to consider shifting their workloads onto Datica’s CKS solution. Our team would be happy to provide additional information and work with you on pricing the solution so that it is a viable option for your needs.

The documentation in this article will continue to be available for customers with existing container services.


Creating a Dockerfile

When creating a Dockerfile to be run on a Datica container service, there are some specific considerations to take into account. Certain aspects of how we manage containers, like logging and monitoring, require that you follow some non-standard practices when creating your Dockerfile. In the future, we plan to improve this workflow to allow you to start your services however you choose. This article goes over the main steps to follow in order to get your image to run on the Datica platform.


All Docker images running on Datica must inherit from the Datica base image. The base image is a lightweight version of what we use to build our other service types, and it comes packaged with all the utilities you need to integrate with the Datica platform. It is important to remember that containers (i.e., running instances of an image) are ephemeral. They are designed to be easily deployable and, more importantly, replaceable. Any data in your container will be lost when it is redeployed. Container services are not meant for storing data, and they will not be backed up. We will not be attaching volumes to container services.

  1. Pull Datica's base image from our Docker registry.
    • Pulling Datica's base image will be very familiar to anyone who has used Docker before:
      • With the Datica CLI:
        • datica -E "My Env" images pull
      • With the Docker CLI:
        • docker pull
  2. Create your Dockerfile. There are some considerations that should be taken into account when creating a Dockerfile:
    • Take a look at the following example app to see how we set up a container. Code snippets in this document are pulled from this example.
    • Datica's base image must be included as the first line in your Dockerfile:
      • FROM
    • The PORT environment variable will be set on the container service the same way it is set on existing code services. Your application should make use of the PORT variable to ensure that your service will be routed appropriately through the service_proxy.
    • For accessing other services available in the Datica environment, you will need to set environment variables. An example of this would be to set an appropriate database connection string as a DATABASE_URL on your container service. You can set environment variables using Datica's vars command.
    • The Datica base image executes runit using a CMD instruction in its Dockerfile to start your application and handle logging, as described below. DO NOT use the CMD instruction in your Dockerfile, as it will override the command specified in the previous image layer. You do not need to execute your application, simply follow the steps outlined below and runit will start it for you.
    • Here is an example of what a Dockerfile looks like:
      • FROM
        MAINTAINER Maintenance Martha <>
        COPY root/ /
        RUN sh -e / && rm /

      • NOTE: It is important that you use the COPY command in your Dockerfile to move the root directory for your application into the container, since using ADD can cause issues when there are discrepancies between the source and destination filesystems (for instance, if you are building your image on a Mac).
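The Dockerfile rules above lend themselves to a quick automated check. The following is a minimal sketch, not a Datica tool: the function name and messages are our own, and it only inspects the first line and top-level CMD instructions.

```shell
# Rough sanity check for the Dockerfile rules described above:
# the first instruction must be FROM (the Datica base image), and the
# file must not declare its own CMD, which would override the base
# image's CMD that starts runit.
check_dockerfile() {
  file="$1"
  if ! head -n 1 "$file" | grep -q '^FROM'; then
    echo "error: first instruction must be FROM the Datica base image"
    return 1
  fi
  if grep -q '^CMD' "$file"; then
    echo "error: remove CMD; the base image's CMD starts runit for you"
    return 1
  fi
  echo "ok"
}
```

Running something like this against your Dockerfile before building catches the two mistakes most likely to produce a container that starts but never launches your application.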


Logging

We highly suggest following the pattern that we use in our example healthcheck application. This ensures that your application is always logging properly, which you need in order to stay in compliance.

  1. Set up runit to start your application and handle logging. In the future, we plan to pull logging functionality out of containers with an external service, which will allow you to start your services in the Dockerfile, but for now we are using runit to handle this. Here are the basic steps to let runit find and start your application:
    • In your project, create a /build directory to store initialization files, with a /root subdirectory inside it. Anything that you want included in your container should go in /root, since this will become the root of its filesystem (as denoted by the “COPY root/ /” instruction in the example Dockerfile above). The file /build/root/ will be referenced as / in your running container.
      • /build
            |-- /root
                   |-- /etc
                   |      |-- /sv
                   |            |-- /healthcheckapp
                   |                       |-- /log
                   |                       |      |-- run
                   |                       |-- run

    • When your container is running, runit will look through subdirectories of /etc/service and attempt to execute any file named “run”. The run script is where you will include the command to start your application, as seen here.

      #! /bin/bash -e
      # this is to log stderr and stdout together
      exec 2>&1
      exec /bin/healthcheckapp

      We recommend that you create your run scripts in the subdirectory /build/root/etc/sv/<myapp>/ within your project and then symlink /etc/sv/* to the /etc/service directory in your running container, as shown in this snippet from

      mkdir -p /etc/service
      ln -s /etc/sv/* /etc/service/

      Note that is executed by RUN in the last line of our Dockerfile; because RUN executes when the image is built, the symlinks are baked into the image rather than recreated each time the container starts.

    • Keep in mind that these run scripts must be executable files. This can be accomplished in a UNIX terminal with the command: chmod +x path/to/run/script
  2. When executing a run script for an application, runit will look for a /log/run script in the same directory. Here is an example of a log run script that can be copied almost exactly (all you need to do is change the APPNAME variable):
    • #!/bin/sh -e
      exec 2>&1

      # the variable definitions are implied by the usage below; these
      # values are typical, adjust APPNAME for your application
      APPNAME=healthcheckapp
      LOGDIR=/var/log/${APPNAME}
      LOGUSER=syslog

      if [ ! -e "${LOGDIR}" ]; then
         mkdir -p "${LOGDIR}"
         chmod 700 "${LOGDIR}"
         chown -R ${LOGUSER}:${LOGUSER} "${LOGDIR}"
      fi

      # you can add options to svlogd below, cf man svlogd
      exec chpst -u ${LOGUSER} \
          svlogd "${LOGDIR}"

    • Once you have set up logging with runit, all you need to do is print to STDOUT or STDERR, and the logs will be captured.
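Putting the two runit steps together, the layout can be scaffolded from your project root with a few commands. This is a sketch under assumptions: the app name and binary path mirror the hypothetical healthcheck example, and the log/run stub should be replaced with the full svlogd script shown above.

```shell
# Scaffold the runit layout described above for a hypothetical
# "healthcheckapp". Run from the root of your project.
APP=healthcheckapp
SV=build/root/etc/sv/${APP}

mkdir -p "${SV}/log"

# run script: starts the application (binary path is illustrative)
cat > "${SV}/run" <<'EOF'
#!/bin/bash -e
# log stderr and stdout together
exec 2>&1
exec /bin/healthcheckapp
EOF

# log/run script: placeholder only; substitute the full svlogd-based
# log run script from this article
cat > "${SV}/log/run" <<'EOF'
#!/bin/sh -e
exec 2>&1
EOF

# runit only executes run scripts that are marked executable
chmod +x "${SV}/run" "${SV}/log/run"
```

A setup script executed by RUN during the image build can then symlink /etc/sv/* into /etc/service so runit picks the services up.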


Monitoring

We use Sensu for monitoring all of our service types. Use these steps as a guideline to get monitoring working on your custom container.

  1. Create the Sensu directory structure as a subdirectory of /root/etc in your project. This is how it looks in our example app:
    • /etc
         |-- /sensu
                |-- /conf.d
                |       |-- custom_check.json
                |-- /plugins
                |       |--

  2. Create your Sensu configuration file in the conf.d directory to tell Sensu what checks to run, and how often.
    • {
          "checks": {
              "custom_ping": {
                  "command": "/etc/sensu/plugins/",
                  "subscribers": [],
                  "interval": 60,
                  "standalone": true
              }
          }
      }
    • Use or create Sensu plugins to check whether your application is running as expected. Take a look at in our example app to see a simple check that hits a ping endpoint on the local server.
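To make the plugin side concrete, here is a sketch of a ping check written as a shell function; the endpoint, port fallback, and output strings are assumptions modeled on the example app, not Datica-provided code. Sensu interprets Nagios-style exit codes, so the function returns 0 for OK and 2 for CRITICAL.

```shell
# Hypothetical Sensu-style ping check (names and endpoint assumed).
# Sensu reads Nagios-style status codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
ping_check() {
  # default to the platform-provided PORT, then to 8080
  port="${1:-${PORT:-8080}}"
  url="http://localhost:${port}/ping"
  if curl -fsS --max-time 2 "$url" > /dev/null 2>&1; then
    echo "PingCheck OK: ${url} responded"
    return 0
  fi
  echo "PingCheck CRITICAL: ${url} did not respond"
  return 2
}
```

In a real plugin, the same logic would live in a standalone executable under /etc/sensu/plugins, referenced by the command field of the check configuration.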

Building and Tagging

We recommend that you create a script in your project to handle building both your application and the Docker image. This makes it easier to keep track of all of your build steps, and the script can be parameterized for different use cases. Note that if your application is written in a compiled language, you must target the Linux operating system and amd64 architecture in order for it to run in a Docker container, as seen in this example script for building a Golang project.

#!/bin/bash -e
# use the first argument as the image tag; the fallback value is assumed
if [ "$#" == "1" ]; then
    TAG=$1
else
    TAG=latest
fi

rm -f root/bin/healthcheckapp
GOOS=linux GOARCH=amd64 go build -o root/bin/healthcheckapp ../main.go

docker build -t healthcheckapp:${TAG} .