Building a Kubernetes Platform

Nate Motyl

Kubernetes is everywhere. Over the last year, it has been difficult to meet anyone working in IT who has not started investigating the technology, and its reach extends across software development, QA, and infrastructure.

Creating a production-ready Kubernetes platform, ready to accept application migrations from a traditional environment, is by no means a simple task. You need an etcd database, kube-controller-manager, kube-scheduler, certificates, CoreDNS, and so on. It also requires an investment of resources and time to determine the appropriate mix of components, and a solid, adaptive testing methodology to support rapid changes as components are exchanged or enhanced.

Besides the creation of the environment and management processes, an area where companies fall short is strategic and tactical planning. Rarely do companies have the luxury to start from scratch with a container initiative for their infrastructure needs. Your container environment design and technology stack will more likely need to support both new and legacy forms of compute, being careful to avoid duplication of operational assets, resources, and expenses (a functional example of such a stack and design can be found in Figures 1 & 2).

Figure 1. Bridge Container Technology Stack.

Figure 2. Container Environment Design.

Management & Automation

By utilizing Rancher, you can remove the pain of creating a K8S cluster manually and automate the cluster setup with one-click automation. Rancher also provides day-0 and day-1 operations for K8S clusters, including provisioning, access control, global DNS, backup and recovery, monitoring, logging, and cluster upgrades. Ansible can be used to provision the CentOS VMs that serve as K8S nodes.
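For context, the manual setup that Rancher automates away can be sketched as an RKE-style cluster definition. The node addresses, SSH user, and etcd snapshot settings below are illustrative placeholders, not values from a real environment:

```yaml
# cluster.yml - minimal RKE cluster definition (all values are placeholders)
nodes:
  - address: 10.0.0.11
    user: rancher
    role: [controlplane, etcd]
  - address: 10.0.0.21
    user: rancher
    role: [worker]
  - address: 10.0.0.22
    user: rancher
    role: [worker]
services:
  etcd:
    # periodic etcd snapshots give you a recovery point for the cluster state
    snapshot: true
    creation: 6h
    retention: 24h
```

Rancher drives this kind of definition for you through its UI and API, so the cluster topology lives in one declarative place instead of in hand-run commands.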

Logging and Monitoring

Rancher has a built-in Fluentd deployment that can be leveraged to build an EFK (Elasticsearch, Fluentd, Kibana) stack. Each cluster can be configured to push Fluentd logs to an Elasticsearch instance.

Kibana acts as an excellent front end for viewing and searching Elasticsearch logs.

Elasticsearch GitHub repo

Prometheus is an excellent means of collecting monitoring metrics. Use the Prometheus server to store time-series data, Alertmanager to manage alerts, node-exporter to export metrics from nodes, and kube-state-metrics to generate metrics for all K8S objects.
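As a sketch of how these pieces connect, a Prometheus scrape configuration covering node-exporter and kube-state-metrics might look like the following; the job names, namespace, and service DNS name are assumptions for illustration, not a prescribed layout:

```yaml
# prometheus.yml fragment - scrape targets for node and cluster-state metrics
scrape_configs:
  - job_name: node-exporter
    kubernetes_sd_configs:
      - role: endpoints            # discover targets via the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_endpoints_name]
        regex: node-exporter       # keep only the node-exporter endpoints
        action: keep
  - job_name: kube-state-metrics
    static_configs:
      - targets: ['kube-state-metrics.monitoring.svc:8080']
```

In practice the Prometheus Helm chart or operator generates equivalent configuration for you; the fragment is only meant to show where each exporter fits.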

Prometheus GitHub repo

Prometheus, however, does not provide a rich UI of its own; Grafana can connect to the Prometheus server to provide graphs and meaningful dashboards for monitoring.
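A minimal Grafana provisioning file wiring Grafana to the Prometheus server could look like this; the service URL assumes Prometheus runs in a `monitoring` namespace, so adjust it for your deployment:

```yaml
# grafana datasource provisioning - the URL below is an assumption
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                                  # Grafana proxies queries server-side
    url: http://prometheus-server.monitoring.svc
    isDefault: true
```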

Grafana GitHub repo

Persistent Storage

Everything in Kubernetes is dynamic and stateless. Well, at least that is how it is supposed to be... but this goes against the principles of a traditional storage solution. Choosing a viable persistent storage solution is one of the toughest challenges you will face. Several popular solutions in the market include Ceph, Rook, StorageOS, and Portworx.

Portworx provides data mobility, high availability, platform independence, encryption, and dynamic provisioning of persistent volumes. On worker nodes, we recommend adding another disk (VMDK) to create a storage pool through Portworx. Portworx comes with its own intelligent scheduler, Stork, which can be adopted to save on licensing costs by installing Portworx on only a subset of worker nodes. Deployments that require persistent storage can specify Stork as their scheduler, so those pods are placed only on the worker nodes that have Portworx installed.
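The Stork arrangement described above boils down to setting `schedulerName` on the workloads that need Portworx volumes. Here is a minimal sketch, where the app name, image, and PVC name are all hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pg-database                 # hypothetical stateful workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pg-database
  template:
    metadata:
      labels:
        app: pg-database
    spec:
      schedulerName: stork          # Stork places this pod on Portworx-enabled nodes
      containers:
        - name: postgres
          image: postgres:12
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pg-data-pvc  # PVC backed by a Portworx storage class
```

Stateless deployments simply omit `schedulerName` and fall back to the default scheduler, which is what keeps the Portworx license footprint small.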

Portworx can be deployed using this Helm chart.

Container Security

Container security is an ever-evolving subject, and given the dynamic nature of pods, it is essential to have visibility and control over every process and communication that occurs inside a pod. NeuVector provides an excellent solution for continuous runtime protection of hosts and pods. NeuVector can scan Kubernetes clusters, nodes, pods, and container images for vulnerabilities and, as a side benefit, can run Docker and Kubernetes benchmark checks against the clusters. It can also act as a network firewall by learning the normal behavior of a pod or service and dynamically creating security policies based on it. Once a service is in 'protect mode', NeuVector will prevent any unauthorized process or network communication from running for that pod or service.

NeuVector can be deployed using this Helm chart.

Load Balancing

Once applications are deployed inside K8S clusters, there are several options for exposing them outside the cluster. Another consideration arises if you are migrating applications from legacy infrastructure to containers and want to preserve a rollback posture, or if services remaining in the legacy environment need to consume services that have been migrated to the K8S cluster.

AVI Networks offers a software-defined load balancer with a separate control plane and data plane. It provides load balancing, traffic management, scalability, and end-to-end automation for K8S services. AVI deploys its Service Engines as pods in the K8S cloud, where they handle north-south traffic and load balancing for K8S services.

Configured with a DNS server and a pool of IPs for IPAM, AVI automatically creates a virtual service every time an ingress is created in the K8S cloud. It assigns an IP from IPAM, creates a DNS entry, and configures the back-end pool of pods. AVI also provides the ability to add various HTTP policies, along with network security policies, through annotations in your ingress.
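A standard Kubernetes ingress is all AVI needs to trigger that automation, with AVI-specific policies layered on via annotations in the metadata. The host, service name, and port below are placeholders, and the exact annotation keys should be taken from the AVI documentation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app                      # hypothetical application
  # AVI HTTP and network-security policies attach here as annotations;
  # consult the AVI docs for the exact annotation keys.
spec:
  rules:
    - host: web-app.example.com      # AVI creates the DNS entry and VIP for this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
```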

CI/CD Tools

Kubernetes helps make true Continuous Deployment a reality, since everything an application needs, including its dependencies, is packaged inside a container. It also takes care of scheduling the workload onto specific worker nodes, and its rolling update strategy executes continuous deployments with zero downtime.
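The rolling update behavior is declared on the Deployment itself; a fragment like the following (the replica and surge counts are illustrative) keeps the full replica count serving throughout a rollout:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow at most one extra pod during the rollout
      maxUnavailable: 0    # never dip below the desired replica count
```

With `maxUnavailable: 0`, Kubernetes only terminates an old pod after its replacement passes readiness checks, which is what makes the zero-downtime claim hold.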

Jenkins is a great tool for Continuous Integration and image build-outs, offering integration with GitLab, Nexus, JFrog Artifactory, SonarQube, NeuVector, Fortify, Helm, and Rancher to provide a complete CI/CD pipeline.

Helm packages the whole application stack into charts, including everything the application needs - the pods, services, secrets, ingresses, persistent storage, etc. Helm also makes it possible to have a consistent deployment across different environments (see Figure 3 for the CI/CD component diagram).
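One common pattern for that cross-environment consistency is a single chart with per-environment values files; a hypothetical `values-prod.yaml` override might look like:

```yaml
# values-prod.yaml - hypothetical production overrides for one shared chart
replicaCount: 4
image:
  tag: "1.4.2"             # pin the exact image promoted through the pipeline
ingress:
  host: app.example.com
resources:
  requests:
    cpu: 500m
    memory: 512Mi
```

Deploying to production then becomes `helm upgrade --install myapp ./myapp-chart -f values-prod.yaml`, with other environments differing only in their values file while the chart templates stay identical.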

Figure 3. Container Component Diagram.

To sum up, there are multiple ways to solve this puzzle, and with new developments every day, it is becoming easier than ever to have your applications deployed in a K8S cluster. I hope this gives you good insight into which areas you need to focus on and what some of your options are for making a K8S cluster a reality for your organization.
