Continuous Delivery of Grouper using Jenkins and Docker

Unicon recently completed a project for the Colorado School of Mines to set up a continuous integration/continuous delivery (CI/CD) pipeline that can serve as a template for modernizing the school's application deployment processes going forward. Because of the School of Mines' participation in the TIER Campus Success Program, Internet2 Grouper and Evolveum midPoint were chosen as the target deployments for this work.

The big picture diagram for the pipeline is:

[Diagram: Continuous Delivery of Grouper using Jenkins and Docker]

Jenkins was deployed and configured to:

  1. Work with GitLab to build a custom Docker image when changes are committed to the Git repo (a typical developer workflow that triggers this is sketched after this list)
  2. Manage the publication of the Docker image to a central Docker image registry
  3. Update the containers running in a Docker Swarm with the new image
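
To make the trigger concrete, a typical developer interaction might look like the following. The repository URL is the one referenced in the pipeline script below; the GitLab webhook that notifies Jenkins of the push is assumed to already be configured.

# Hypothetical developer workflow that kicks off the pipeline.
# The repo URL matches the one referenced in the Jenkinsfile below.
git clone gitlab@git.example.edu:devops/grouper.git
cd grouper

# Edit the Dockerfile or the Grouper configuration overlay, then:
git add .
git commit -m "Adjust Grouper UI configuration"
git push origin master   # GitLab notifies Jenkins, and the pipeline below runs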

The process works very well, so School of Mines has permitted Unicon to share it with the community at large. The pipeline script that makes this work is:

pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Build') {
            steps {
                checkout([
                    $class: 'GitSCM', branches: [[name: '*/master']],
                    userRemoteConfigs: [[url: 'gitlab@git.example.edu:devops/grouper.git',credentialsId:'gitlab-ssh-key']]
                ])
                sh 'docker image build --no-cache --pull --tag example/grouper:latest .'
            }
        }
        stage('Publish') {
            steps {
                sh 'docker image tag example/grouper:latest registry.example.edu/example/grouper:latest'
                sh "docker image tag example/grouper:latest registry.example.edu/example/grouper:${env.BUILD_NUMBER}"
                sh 'docker image push registry.example.edu/example/grouper:latest'
                sh "docker image push registry.example.edu/example/grouper:${env.BUILD_NUMBER}"
            }
        }
        stage('Run') {
            environment {
                DOCKER_TLS_VERIFY = '1'
                DOCKER_HOST = 'tcp://iamswarm.example.edu:2376'
            }
            steps {
                parallel (
                    "daemon" : {
                        withCredentials([dockerCert(credentialsId: 'iam-swarm',
                                variable: 'DOCKER_CERT_PATH')]) {
                            sh "docker service update --with-registry-auth --image registry.example.edu/example/grouper:${env.BUILD_NUMBER} grouper_daemon"
                        }
                    },
                    "ui" : {
                        withCredentials([dockerCert(credentialsId: 'iam-swarm',
                                variable: 'DOCKER_CERT_PATH')]) {
                            sh "docker service update --with-registry-auth --image registry.example.edu/example/grouper:${env.BUILD_NUMBER} grouper_ui"
                        }
                    },
                    "ws" : {
                        withCredentials([dockerCert(credentialsId: 'iam-swarm',
                                variable: 'DOCKER_CERT_PATH')]) {
                            sh "docker service update --with-registry-auth --image registry.example.edu/example/grouper:${env.BUILD_NUMBER} grouper_ws"
                        }
                    }
                )
            }
        }
    }
}

Here's a quick breakdown of what is happening in this script:

  1. As indicated by the agent's docker label, this pipeline will only execute on a Jenkins worker node that has been labeled as supporting Docker. These worker nodes need nothing special beyond a fairly current version of the Docker engine and client, and a one-time docker login by the Jenkins account to the private image registry (see the node-preparation sketch after this list). The nodes do not need to be part of a Docker Swarm or a Kubernetes cluster.
  2. The Build step checks out the image source code and builds the resulting Docker image. Because of the --pull flag, it also checks whether the base image specified in the Dockerfile has been updated.
  3. The Publish step tags the newly produced image with fully qualified image names based on the current Jenkins job's BUILD_NUMBER and "latest", then pushes the image to the registry under both name/tag sets.
  4. The Run step does a few different things:
    1. It sets environment variables that point the Docker client at the Swarm manager, which has been configured to accept remote Docker API requests authenticated with mutual TLS.
    2. It executes three updates in parallel. The Docker services for the Grouper Daemon, Grouper UI, and Grouper Web Services are each updated to use the new image that corresponds to the job's BUILD_NUMBER. For each update, Jenkins makes the client TLS key and certificate available to the docker service update process and removes them from the worker node after the process completes (a manual equivalent is sketched after this list).
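
For item 1, preparing a worker node is minimal. Here is a rough sketch, assuming a systemd-based Linux host with Docker CE already installed and a Jenkins agent running under a jenkins account (names are illustrative):

# One-time setup on a Jenkins worker node that will be labeled 'docker'.
sudo systemctl enable --now docker        # make sure the engine is running
sudo usermod -aG docker jenkins           # let the Jenkins agent user talk to the daemon

# Log the Jenkins account into the private registry once; the credentials are
# cached in ~jenkins/.docker/config.json and reused by later pipeline runs.
sudo -u jenkins docker login registry.example.edu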
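
For item 4, here is roughly what one of the parallel branches amounts to when written out as a plain shell session. The certificate path and build number are illustrative; in the real pipeline, Jenkins supplies DOCKER_CERT_PATH via withCredentials and cleans it up afterwards.

# Point the Docker client at the Swarm manager's TLS-protected remote API.
export DOCKER_HOST=tcp://iamswarm.example.edu:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/tmp/iam-swarm-certs    # client key, cert, and CA live here

# Roll one service to the image tagged with the Jenkins build number.
# --with-registry-auth forwards the registry login to the Swarm nodes
# so they can pull the private image.
docker service update --with-registry-auth \
    --image registry.example.edu/example/grouper:42 grouper_daemon

# Watch the rolling update converge.
docker service ps grouper_daemon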

That's it. Pretty straightforward.

If you compare these steps with the diagram, you'll notice that the red testing steps were not mentioned. Those tests can be quite involved and were out of scope for this project. Perhaps that work will be picked up as part of a future project by a client and then shared with the community like this work was. Reach out to Unicon if that sounds like something that your organization might be interested in. Also, reach out if you'd like Unicon to assist you with setting up any part of your CI/CD pipeline.

John Gasper
Software Architect
John Gasper is a wonderful mix of Identity and Access Management (IAM) consultant and DevOps implementer. By day, he is implementing, configuring, or advising on one of the many open source IAM applications, including CAS Server, Shibboleth, Grouper, SimpleSAMLphp, and 389 Directory Server, and occasionally on closed source applications like Microsoft Active Directory and Active Directory Federation Services. By night, John tries to automate the world of IT using tools such as Docker, Jenkins, and Kubernetes. He has experience with cloud providers including Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure. Before joining Unicon in 2013, he worked in IT at Eastern Washington University, covering multiple facets of IT including Banner development and administration, Active Directory administration, and pretty much everything in between. They even let him write code for Cisco IP Phone applications.