Devops

Dec 11, 2020

Why we built Kube360

Over a year ago, FP Complete began work on Kube360. Kube360 is a distribution of Kubernetes supporting multiple cloud providers, as well as on-prem deployments. It includes the most requested features we've seen in Kubernetes clusters. It aims to address many pain points companies typically face in Kubernetes deployments, especially around security. And it centralizes the burden of keeping up with the Kubernetes treadmill in a single code repo that we maintain.


That's all well and good. But what led FP Complete down this path? Who is the target customer? What might you get out of this versus other alternatives? Let's dive in.


Repeatable process

In the past decade, FP Complete has set up dozens of production clusters for hosting container-based applications. We have done this for both internal and customer needs. Our first clusters predate public releases of Docker and relied on tooling like LXC. We have kept our recommendations and preferred tooling up to date as the deployment world has (very rapidly) iterated.


In the past three years, we have seen a consistent move across the industry towards Kubernetes standardization. Kubernetes addresses many of the needs of most companies around deployment. It does so in a vendor-neutral way, and mostly in a developer-friendly way.


But setting up a Kubernetes cluster in a sensible, secure, and standard fashion is far from trivial. Kubernetes is highly configurable, but out of the box provides little functionality. Different cloud vendors offer slightly different features.


As we were helping our clients onboard with Kubernetes, we noticed some recurring themes:

  • Clients were looking for significant guidance around best practices with Kubernetes

  • We ended up deploying clusters that were largely identical for different clients

  • Maintaining each of these clusters became a large task on its own

We decided to move ahead with creating a repeatable process for setting up Kubernetes clusters. We dogfooded this on ourselves and have been running all FP Complete services on Kube360 for about nine months now. Our process supports both initial setup and upgrades, covering the latest version of Kubernetes itself as well as the underlying components.


With this in place, we have been able to get clients up and running far more rapidly with fully functioning, batteries-included clusters. The risk of misconfiguration or mismatch between components has come down significantly. And maintenance costs can now be amortized across multiple projects, instead of falling to each individual DevOps team.


What we included

Our initial collection of tools was based on what we were leveraging internally and what we had seen most commonly with our clients. This is what we consider a "basic, batteries-included" Kubernetes distribution. The functionality included:


  • Metrics collection

  • Monitoring dashboards

  • Log aggregation and search/index

  • Alerts

  • In-cluster Continuous Deployment

We based our choices of defaults here on best-in-class open-source offerings. These were tools we were already familiar with, with great support across the Kubernetes ecosystem. That familiarity also makes Kube360 a less risky endeavor for our clients. We have strived to avoid the common vendor lock-in present in many offerings. With Kube360, you're getting a standard Kubernetes cluster with bells and whistles added. But you're getting it faster, better tested, and maintained and supported by an external team.


The one curveball in this mix was the inclusion of Istio as a service mesh layer. We had already been investigating Istio for its support of in-transit encryption within the cluster, a feature we had implemented previously. Our belief is that Istio will continue to gain major adoption in the Kubernetes ecosystem, and we wanted to future-proof Kube360 for it.


Cloud native

Kube360 is designed to run mostly identically across different cloud vendors, as well as on-prem. However, where possible, we've leveraged cloud native service offerings for tighter integration. This includes:

  • Leveraging cloud native Kubernetes control plane offerings, like Amazon's EKS or Azure's AKS

  • Defaulting to cloud-specific secrets management instead of using the default secrets engine or a third-party tool like Vault

  • For durability and cost-effectiveness, we use cloud-specific blob storage offerings, together with wrappers to abstract over the underlying APIs (see the sketch below)
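
The wrappers mentioned in that last point are easiest to picture with a small example. Below is a minimal sketch, in Python, of the kind of abstraction involved; the BlobStore interface and class names are purely illustrative and are not Kube360's actual API.

    # Hypothetical sketch of a thin wrapper over cloud blob storage.
    # The interface and class names are illustrative only.
    from abc import ABC, abstractmethod

    import boto3                                       # AWS SDK
    from azure.storage.blob import BlobServiceClient   # Azure SDK


    class BlobStore(ABC):
        """Minimal, cloud-neutral interface that applications code against."""

        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...


    class S3BlobStore(BlobStore):
        def __init__(self, bucket: str):
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


    class AzureBlobStore(BlobStore):
        def __init__(self, connection_string: str, container: str):
            self._svc = BlobServiceClient.from_connection_string(connection_string)
            self._container = container

        def put(self, key: str, data: bytes) -> None:
            blob = self._svc.get_blob_client(container=self._container, blob=key)
            blob.upload_blob(data, overwrite=True)

        def get(self, key: str) -> bytes:
            blob = self._svc.get_blob_client(container=self._container, blob=key)
            return blob.download_blob().readall()

Because application code only ever sees the BlobStore interface, the same workload can target S3 or Azure Blob Storage (or an on-prem object store with a similar adapter) without code changes.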


Fully configurable

We've tried to keep the base of Kube360 mostly unopinionated. But in our consulting experience, each organization tends to need at least a few modifications to a "standard" Kubernetes setup. The most common change we see is integration with SaaS monitoring and logging solutions.


We've designed configurability into Kube360 from the ground up. Outside of a few core services, each add-on can be enabled or disabled. We can easily retarget metrics to be sent to a third party instead of collected in-cluster. Even more fundamental tooling, such as the ingress controller, can be swapped out for alternatives.
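
As a purely hypothetical illustration of the idea (not Kube360's actual configuration format), the toggles described above might be modeled along these lines:

    # Hypothetical sketch only: the field names below are illustrative
    # and do not reflect Kube360's real configuration surface.
    from dataclasses import dataclass, field
    from typing import Optional


    @dataclass
    class MetricsConfig:
        enabled: bool = True
        # When set, ship metrics to an external SaaS endpoint instead of
        # collecting them in-cluster.
        remote_write_url: Optional[str] = None


    @dataclass
    class ClusterConfig:
        ingress_controller: str = "nginx"  # swappable for an alternative
        metrics: MetricsConfig = field(default_factory=MetricsConfig)
        log_aggregation: bool = True
        continuous_deployment: bool = True


    # Example: keep metrics enabled, but forward them to a third-party
    # monitoring service instead of collecting them in-cluster.
    cfg = ClusterConfig(
        metrics=MetricsConfig(remote_write_url="https://metrics.example.com/write"),
    )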


Authentication

The biggest addition we've made to standard tooling is around authentication. In our experience, the Achilles' heel of many cloud environments, and particularly Kubernetes environments, is mismanagement of authentication. We've seen many setups with:


  • Long-term credential lifetimes

  • No multi-factor authentication

  • Credentials shared across multiple users and services

  • Overly broad privilege grants

The reason for this is, in our opinion, quite simple: doing the right thing is difficult out of the box. We believe this is a major weakness that needs to be addressed. So, with Kube360, we've dedicated significant effort to providing a best-in-class authentication and authorization experience for everyone in your organization.


In short, our goal with Kube360 is to:

  • Leverage existing user directories and credentials. You shouldn't need yet another password.

  • Make it easy to grant everyone in your organization access to the cluster. We believe in democratizing access. Executives should be able to easily gain read-only access to informational dashboards, so they feel confident in their services.


  • Ensure credentials are all per-user, time-based, and never copy-pasted between screens. We heavily leverage open standards, like OpenID Connect.


  • Carry a single set of credentials through not just Kubernetes, but all add-ons provided with Kube360, including dashboards, log indexing, and Continuous Deployment.


  • Provide easy command-line access to the Kubernetes cluster (and, in the case of Amazon, all AWS services) leveraging secure and easy credential acquisition (a brief sketch of this pattern follows the list).
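
To make that credential model concrete, here is a minimal Python sketch of the general pattern: a short-lived OpenID Connect token, obtained from the organization's identity provider, is presented to the Kubernetes API as a bearer token. The acquire_id_token helper and the endpoints are placeholders, not Kube360's actual login flow.

    # Illustrative only: authenticating to the Kubernetes API with a
    # short-lived OIDC token. acquire_id_token is a placeholder for
    # whatever login flow the identity provider uses.
    from kubernetes import client


    def acquire_id_token() -> str:
        # In a real setup this would run an OpenID Connect flow against
        # the organization's identity provider (for example, a
        # browser-based login) and return a short-lived ID token.
        raise NotImplementedError


    def kube_api(api_server: str, ca_cert_path: str) -> client.CoreV1Api:
        cfg = client.Configuration()
        cfg.host = api_server
        cfg.ssl_ca_cert = ca_cert_path
        # Per-user, time-limited credential: no long-lived shared secrets.
        cfg.api_key = {"authorization": "Bearer " + acquire_id_token()}
        return client.CoreV1Api(client.ApiClient(cfg))


    # Example usage: list the pods this user's RBAC role allows them to see.
    # v1 = kube_api("https://my-cluster.example.com", "/path/to/ca.crt")
    # for pod in v1.list_namespaced_pod("default").items:
    #     print(pod.metadata.name)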


We strongly believe that by making the right thing easy, we will foster an environment where people more readily do the right thing. We also believe that by making the cluster an open book for the entire organization, including developers, operators, and executives, we can build trust within the organization.


Surprises

Since initial development and release, we've already seen some requests for features that we had not anticipated so early on.

The first was support for deploying Windows Containers. We have previously deployed hybrid Linux/Windows clusters but had always kept the Windows workloads on traditional hosting. That was for a simple reason: our clients historically had not yet embraced Windows Containers. At this point, Kube360 fully supports hybrid Windows/Linux clusters. And we have deployed such production workloads for our clients.


On-prem was next. We've seen far more rapid adoption of "bleeding edge" technology among cloud users. However, at this point, Kubernetes is not bleeding edge. We're seeing interest in on-prem increase drastically. We're happy to say that on-prem is now a first-class citizen in the Kube360 world, together with AWS and Azure.


The final surprise was multicluster management. Historically, we have supported clients operating on different cloud providers. But typically, deployments within a single operating group focused on a single cluster within a single provider. We're beginning to see a stronger move towards multicluster management across an organization. This fits in nicely with our vision of democratizing access to clusters. We have begun offering high-level tooling for viewing status across multiple Kube360 clusters, regardless of where they are hosted.


The future

We are continuing to build up the feature set around Kube360. While our roadmap will be influenced by client requests, some of our short-term goals include:


  • Support for additional cloud providers, in particular Google Cloud.

  • GUI tools for permissions management. Our authentication and authorization story is solid, but managing RBAC permissions within Kubernetes is still non-trivial. We want to make this process as easy as possible.


  • To ease Kubernetes migrations, we intend to include basic scaffolding tools to address common application deployment cases.

  • And finally, we hope to expand our multicluster management tooling to provide better insights and resolution tools.

Learn more

If you're looking to make a move to Kubernetes or are interested in seeing if you can reduce your Kubernetes maintenance costs with a move to a managed product offering, please contact us for more information. We'd love to tell you more about what Kube360 has to offer, demo the product on a live cluster, and discuss possible deployment options.


Learn more about Kube360