Containers and OpenShift
A container packages an application together with its dependencies, making the application easy to install and run. We’ve used containers in conjunction with OpenShift for a few projects recently.
We recently blogged on containers — what they are, their benefits, and some of the open source tools used to create them (Docker and Kubernetes). That original post was seeded by a presentation to a major university client, delivered in conjunction with Des Drury.
In this blog, we’ll briefly recap some of those elements, but also talk about OpenShift (thanks to Michael Schmid from amazee.io for great info on OpenShift) and how it can be used within a containerisation strategy.
What are containers?
In the software world, a container is a uniform structure in which any application can be stored, transported and run: it packages the application and its dependencies so the application becomes easy to install anywhere.
How containers work
A container wraps an application in its own operating environment and can be placed on any host machine. The application runs in the userspace of the operating system, the part of the system kept separate from the OS kernel, and no hypervisor is needed. Developers decide what goes into the container, including its dependencies; containers simply give you a cleaner way to package those things.
A container image
A container is a running instance of an image. The most common tool used to create container images is the open source tool Docker, so a Docker image is a container image created with Docker. To update an application, developers change the code, build a new container image, then redeploy the image to run on the host OS.
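As a minimal sketch, a Dockerfile describes what goes into an image (the base image, file names and the Node.js app here are illustrative assumptions, not from a specific project):

```dockerfile
# Illustrative Dockerfile for a hypothetical Node.js app
FROM node:18-alpine          # base image providing the runtime
WORKDIR /app
COPY package.json ./
RUN npm install              # bake the dependencies into the image
COPY . .
CMD ["node", "server.js"]    # command the container runs on start
```

Running `docker build -t myapp:1.0 .` produces the image, and `docker run myapp:1.0` starts a container from it. Updating the application means rebuilding the image and redeploying it.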
Key benefits of containerisation
There are many benefits to containerisation, which is why it’s becoming so widely used. Some of the benefits include:
- More portable than virtual machines (VMs)
- Faster and more efficient than VMs
- Greater flexibility
- Simple packaging format
- Rapid and consistent deployment of workloads
- Robust runtime environment for scaling and self-healing
- Standard management interface
- Can avoid vendor lock-in
Kubernetes is a tool that enables automatic deployment, scaling and management of application containers. Like Docker, Kubernetes is open source, although it was originally built by Google.
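As an illustrative sketch of that management (the names and image below are assumptions), a Kubernetes Deployment manifest declares a desired state and Kubernetes keeps it true, restarting or rescheduling containers that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
```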
Kubernetes uses namespaces to map containers into environments.
Image courtesy of Des Drury
A namespace contains one or more pods (each pod runs one or more containers) and/or other Kubernetes resource types.
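As a sketch, a namespace and a pod deployed into it might look like this (names and image are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging              # one environment, e.g. staging
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: staging         # the pod lives inside the namespace
spec:
  containers:
    - name: web
      image: example/web:1.0
```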
Services that support the applications can be deployed into the cluster, or a cloud provider/SaaS solution can be used (or a combination of both). If all support services run within the cluster, it becomes easy to migrate between cloud providers.
Multiple clusters can be used to provide additional redundancy and localised placement of workloads for a given region. The clusters can be any combination of cloud (e.g. Google, Azure or AWS) or on-premise (e.g. OpenStack).
Docker and Kubernetes
Containerisation took off with Docker in 2013, and in 2017 Docker incorporated Kubernetes into the Docker platform. The two now integrate seamlessly, combining the benefits of containerisation with the management capabilities of Kubernetes. And all of it is open source.
While Kubernetes brings features like orchestration and self-healing that are essential for running Docker in production, it is also fast-changing software. Kubernetes can be compared to a vanilla Linux kernel: the Linux kernel runs almost everything today, but nobody runs it directly; it’s always bundled into a Linux distribution. OpenShift is such a distribution for Kubernetes, and it has the following advantages:
- OpenShift is fully focused on enterprise-grade security. By default, containers run as a random user ID that the developer writing the Docker image cannot predict, which drastically increases security.
- OpenShift adds a networking layer to Kubernetes that gives every namespace (called a project within OpenShift) its own virtual network, which cannot be accessed from outside that namespace.
- The Kubernetes codebase moves very fast: as soon as a new version is released, older versions lose support and no bug fixes are backported to them. Red Hat maintains OpenShift versions much longer and backports critical patches to older releases.
- The ‘default’ Kubernetes upgrade path is rough, and sometimes the only way to upgrade a cluster is to build a completely new one and migrate all Docker containers across, which is expensive and time-intensive. OpenShift provides fully automated, tested upgrade paths.
- Red Hat also provides enterprise 24/7 support and a dedicated security response team.
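Because OpenShift assigns that random user ID at runtime, images must not assume a fixed user. A common pattern, sketched here with illustrative paths, relies on the fact that the arbitrary UID always belongs to the root group (group 0), so the group is given the same file permissions as the owner:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
# OpenShift runs the container as a random, non-root UID in group 0,
# so grant the group the same file permissions as the owner
RUN chgrp -R 0 /app && chmod -R g=u /app
USER 1001                  # any non-root UID; OpenShift overrides it at runtime
CMD ["node", "server.js"]
```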
OpenShift in action
We recently used OpenShift as part of the platform setup for a large government client. amazee.io was engaged to carry out the OpenShift cluster setup in the Amazon Web Services (AWS) Sydney region. The cluster was set up in a highly available configuration spread across all three availability zones for redundancy. This redundancy allows the cluster to withstand losing two of the three zones and still maintain service availability.
Lagoon was installed inside the cluster and used for continuous delivery of all projects linked to the platform. Any new branch created in a source repository is automatically deployed to OpenShift. Lagoon also supports a local development environment based on Docker Compose, which lets developers set up an exact clone of the production platform because it reuses the same base images.
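Such a local environment is typically described in a docker-compose.yml. The sketch below is purely illustrative (the service names and images are assumptions; see the Lagoon documentation for its exact conventions):

```yaml
version: '2.3'
services:
  nginx:
    image: example/nginx:latest    # same base image as production
    ports:
      - "8080:8080"
  php:
    image: example/php:latest
  mariadb:
    image: mariadb:10.4            # local stand-in for the managed database
```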
Get in touch
If you’d like to know more about containers, feel free to contact us using the form below or call us on 1300 727 952.