Devops

Jan 31, 2018

How to Implement Containers to Streamline Your DevOps Workflow

What are Docker Containers?

Docker containers are a form of "lightweight" virtualization. They allow a

process or process group to run in an environment with its own file system,

somewhat like chroot jails, and also with its own process table, users and

groups and, optionally, virtual network and resource limits. For most purposes,

the processes in a container think they have an entire OS to themselves and do

not have access to anything outside the container (unless explicitly granted).

This lets you precisely control the environment in which your processes run,

allows multiple processes on the same (virtual) machine that have completely

different (even conflicting) requirements, and significantly increases isolation

and container security.

In addition to containers, Docker makes it easy to build and distribute images

that wrap up an application with its complete runtime environment.

For more information, see What are containers and why do you need

them? and What Do Containers Have to Do with DevOps, Anyway?.
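
As a quick, illustrative sketch (the image and commands here are
examples, not requirements), you can see this isolation firsthand by
starting a shell in a container:

    # Start an interactive container from a pinned Ubuntu image;
    # --rm removes the container when the shell exits.
    docker run --rm -it ubuntu:16.04 bash

    # Inside the container, only the container's processes are visible:
    ps aux

    # And the file system is the image's, not the host's:
    ls /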

Containers vs Virtual Machines (VMs)

The difference between the "lightweight" virtualization of containers and

"heavyweight" virtualization of VMs boils down to that, for the former, the

virtualization happens at the kernel level while for the latter it happens at

the hypervisor level. In other words, all the containers on a machine share the

same kernel, and code in the kernel isolates the containers from each other

whereas each VM acts like separate hardware and has its own kernel.

Containers are much less resource intensive than VMs because they do not need

to be allocated exclusive memory and file system space or have the overhead of

running an entire operating system. This makes it possible to run many more

containers on a machine than you would VMs. Containers start nearly as fast as

regular processes (you don't have to wait for the OS to boot), and parts of the

host's file system can be easily "mounted" into the container's file system

without the additional overhead of network file system protocols.

On the other hand, isolation is weaker. If you are not careful, you can

oversubscribe a machine by running containers that need more resources than the

machine has available (this can be mitigated by setting appropriate resource

limits on containers). While container security is an improvement over normal

processes, the shared kernel means the attack surface is greater and there is

more risk of leakage between containers than there is between VMs.
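
For example (a sketch; the image name myapp:1.0.0 and the limits are
illustrative), Docker's resource flags let you cap what a container can
consume so it cannot oversubscribe the host:

    # Cap the container at 512 MB of RAM and 1.5 CPUs.
    docker run --rm --memory=512m --cpus=1.5 myapp:1.0.0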

For more information, see Docker containers vs. virtual machines: What's the difference? and DevOps Best Practices: Immutability.

How Docker Containers Enhance Continuous Delivery Pipelines

There are, broadly, two areas where containers fit into your DevOps

workflow: for builds, and for deployment. They are often used together,

but do not have to be.

Builds

  • Synchronizing build environments: It can be difficult to keep

    build environments synchronized between developers and CI/CD

    servers, which can lead to unexpected build failures or changes in

    behaviour. Docker images let you specify exactly the build tools,

    libraries, and other dependencies (including their versions)

    required without needing to install them on individual machines, and

    distribute those images easily. This way you can be sure that

    everyone is using exactly the same build environment.

  • Managing changes to build environments: Managing changes to

    build environments can also be difficult, since you need to roll

    those out to all developers and build servers at the right time.

    This can be especially tricky when there are multiple branches of

    development, some of which may need older or newer environments than

    each other. With Docker, you can specify a particular version of the

    build image along with the source code, which means a particular

    revision of the source code will always build in the right

    environment.

  • Isolating build environments: One CI/CD server may have to build

    multiple projects, which may have conflicting requirements for build

    tools, libraries, and other dependencies. By running each build in

    its own ephemeral container created from potentially different

    Docker images, you can be certain that these build environments

    will not interfere with each other (see the sketch after this list).
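
As a minimal sketch of such an ephemeral build (the image name
build-env:1.2.3 and the make invocation are hypothetical), a CI script
might run:

    # Mount the checked-out source into a throwaway container and build;
    # --rm discards the container, and any stray state, afterwards.
    docker run --rm \
        -v "$PWD":/src \
        -w /src \
        build-env:1.2.3 \
        make test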

Deployment

  • Runtime environment bundled with application: The CD system

    builds a complete Docker image which bundles the application's

    environment with the application itself and then deploys the whole

    image as one "atomic" step. There is no chance for configuration

    management scripts to fail at deployment time, and no risk of the

    system configuration being out of sync.

  • Preventing malicious changes: Container security is improved by

    using immutable SHA digests to identify Docker images: an image

    pulled by digest cannot be silently replaced with a tampered one,

    so a malicious actor cannot inject malware into your application

    or its environment without detection.

  • Easily roll back to a previous version: All it takes to roll

    back is to deploy a previous version of the Docker image. There is

    no worrying about system configuration changes needing to be

    manually rolled back.

  • Zero downtime rollouts: In conjunction with container

    orchestration tools like Kubernetes, it is easy to roll out new

    image versions with zero downtime (see the sketch after this list).

  • High availability and horizontal scaling: Container

    orchestration tools like Kubernetes make it easy to distribute the

    same image to containers on multiple servers, and add/remove

    replicas at will or automatically.

  • Sharing a server between multiple applications: Multiple

    applications, or multiple versions of the same application (e.g. a

    dev and qa deployment), can run on the same server even if they have

    conflicting dependencies, since their runtime environments are

    completely separate.

  • Isolating applications: When multiple applications are deployed

    to a server in containers, they are isolated from one another.

    Container security means each has its own file system, processes,

    and users, so there is less risk that they interfere with each

    other, accidentally or intentionally. When data does need to be shared between

    applications, parts of the host file system can be mounted into

    multiple containers, but this is something you have full control

    over.
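
As a sketch of a zero-downtime rollout and rollback (the deployment
name my-app, container name app, and registry path are hypothetical):

    # Roll out a new image version; Kubernetes replaces pods gradually,
    # so the application stays available throughout.
    kubectl set image deployment/my-app app=registry.example.com/my-app:42

    # Watch the rollout progress.
    kubectl rollout status deployment/my-app

    # If something goes wrong, roll back to the previous version.
    kubectl rollout undo deployment/my-app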

Implementing Containers into Your DevOps Workflow

Containers can be integrated into your DevOps toolchain incrementally.

Often it makes sense to start with the build environment, and then move

on to the deployment environment. This is a very broad overview of the

steps for a simple approach, without delving into the technical details

very much or covering all the possible variations.

Requirements

  • Docker Engine installed on build servers and/or application servers

  • Access to a Docker Registry. This is where Docker images are stored

    and pulled. There are numerous services that provide registries, and

    it's also easy to run your own.
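
For instance (a sketch using Docker's official registry image; the
image name my-build-env:1.0 is hypothetical), running your own
registry is a one-liner:

    # Run a private registry on port 5000.
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag and push an image to it.
    docker tag my-build-env:1.0 localhost:5000/my-build-env:1.0
    docker push localhost:5000/my-build-env:1.0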

Containerizing the build environment

Many CI/CD systems now include built-in Docker support or easily enable

it through plugins, but docker is a command-line application which

can be called from any build script even if your CI/CD system does not

have explicit support.

  1. Determine your build environment requirements and write a Dockerfile, the specification used to build an image for build containers, basing it on an existing Docker image. If you

    already use a configuration management tool, you can use it within

    the Dockerfile. Always specify precise versions of base images and

    installed packages so that image builds are consistent and upgrades

    are deliberate (a sketch of steps 1-5 follows this list).

  2. Build the image using docker build and push it to the Docker registry using docker push.

  3. Create a Dockerfile for the application that is based on the build

    image (specify the exact version of the base build image). This file

    builds the application, adds any required runtime dependencies that

    aren't in the build image, and tests the application. A multi-stage

    Dockerfile can be used if you don't want the application deployment image to include all the build dependencies.

  4. Modify CI build scripts to build the application image and push it

    to the Docker registry. The image should be tagged with the build

    number, and possibly additional information such as the name of the

    branch.

  5. If you are not yet ready to deploy with Docker, you can extract the build artifacts from the resulting Docker image (also shown in the sketch below).
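
The following sketch ties steps 1-5 together. All names (the
registry.example.com registry, the build-env and my-app images, and
the make targets) are illustrative assumptions, not prescriptions:

    # Dockerfile.build-env -- the build-environment image (step 1).
    # The base image is pinned to an exact version; installed packages
    # would be pinned similarly (elided here).
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y build-essential && \
        rm -rf /var/lib/apt/lists/*

    # Build and push the build image (step 2).
    docker build -f Dockerfile.build-env \
        -t registry.example.com/build-env:1.0 .
    docker push registry.example.com/build-env:1.0

    # Dockerfile -- multi-stage application image (step 3). The first
    # stage compiles and tests; the second copies in only the binary.
    FROM registry.example.com/build-env:1.0 AS build
    COPY . /src
    WORKDIR /src
    RUN make && make test

    FROM ubuntu:16.04
    COPY --from=build /src/bin/my-app /usr/local/bin/my-app
    CMD ["my-app"]

    # CI script: build, tag with the build number, and push (step 4).
    docker build -t registry.example.com/my-app:$BUILD_NUMBER .
    docker push registry.example.com/my-app:$BUILD_NUMBER

    # Not deploying with Docker yet? Extract the artifacts (step 5).
    mkdir -p artifacts
    docker create --name tmp registry.example.com/my-app:$BUILD_NUMBER
    docker cp tmp:/usr/local/bin/my-app ./artifacts/
    docker rm tmp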

It is best to also integrate building the build image itself into your

DevOps automation tools.

Containerizing deployment

This can be easier if your CD tool has support for Docker, but that is

by no means necessary. We also recommend deploying to a container

orchestration system such as Kubernetes in most cases.

Half the work has already been done, since the build process creates and

pushes an image containing the application and its environment.

  • If using Docker directly, it's a matter of updating deployment scripts to use docker run on the application server with the

    image and tag that was pushed in the previous section (after

    stopping any existing container). Ideally your application accepts

    its configuration via environment variables, in which case you use

    the -e argument to specify those values depending on which

    stage is being deployed. If a configuration file is used, write it

    to the host file system and then use the -v argument to mount it to the correct path in the container (see the sketch after this list).

  • If using a container orchestration system such as Kubernetes, you

    will typically have the deployment script connect to the

    orchestration API endpoint to trigger an image update (e.g. using

    kubectl set image, a Helm chart, or better yet, a kustomization).
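
As a sketch of both approaches (my-app, the port, the config path, and
the registry path are hypothetical):

    # Plain Docker: stop the old container and start the new version,
    # passing configuration via environment variables and a mounted file.
    docker stop my-app || true
    docker rm my-app || true
    docker run -d --name my-app \
        -e DATABASE_URL="$DATABASE_URL" \
        -v /etc/my-app/config.yaml:/app/config.yaml \
        -p 8080:8080 \
        registry.example.com/my-app:$BUILD_NUMBER

    # Kubernetes: trigger a rolling update to the new image.
    kubectl set image deployment/my-app \
        app=registry.example.com/my-app:$BUILD_NUMBER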

Once deployed, tools such as Prometheus are well suited to Docker

container monitoring and alerting, but this can be plugged into existing

monitoring systems as well.
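
As a minimal sketch (assuming a cAdvisor container exporting
per-container metrics on port 8080), a Prometheus scrape configuration
might look like:

    # prometheus.yml
    scrape_configs:
      - job_name: 'cadvisor'
        static_configs:
          - targets: ['cadvisor:8080']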

FP Complete has implemented this kind of DevOps workflow, and

significantly more complex ones, for many clients and would love to

count you among them! See our Devops Services page. For more

information, see How to secure the container lifecycle and

Containerizing a legacy application: an overview.