Kubernetes is everywhere. Over the last year it has been difficult to meet anyone working in IT who has not started to investigate the technology, and its reach extends across software development, QA, and infrastructure.
Creating a production-ready Kubernetes platform, ready to accept application migrations from a traditional environment, is by no means a simple task. You need an etcd database, the kube-controller-manager, the kube-scheduler, certificates, CoreDNS, and so on. It also requires an investment of resources and time to determine the appropriate mix of components, and a solid, adaptive testing methodology to support rapid changes as components are exchanged or enhanced.
Besides the creation of the environment and management processes, an area where companies fall short is strategic and tactical planning. Rarely do companies have the luxury to start from scratch with a container initiative for their infrastructure needs. Your container environment design and technology stack will more likely need to support both new and legacy forms of compute, being careful to avoid duplication of operational assets, resources, and expenses (a functional example of such a stack and design can be found in Figures 1 & 2).
Management & Automation
By utilizing Rancher, you can remove the pain of creating a K8S cluster manually and automate cluster setup with one-click automation. Rancher also covers day-one and day-two operations for K8S clusters, including provisioning, access control, global DNS, backup and recovery, monitoring, logging, and cluster upgrades. Ansible can be used to provision the CentOS VMs that serve as K8S nodes.
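As a rough sketch of the Ansible side, a play like the following could clone the CentOS node VMs from a vSphere template. Everything here is a placeholder assumption (vCenter host, datacenter, template, and node names), and it presumes the community.vmware collection and a vSphere environment:

```yaml
# Hypothetical Ansible play: clone one CentOS VM per future K8S node.
# All names and credentials below are placeholders, not real values.
- name: Provision CentOS VMs for Kubernetes nodes
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone a VM per node from a CentOS template
      community.vmware.vmware_guest:
        hostname: "vcenter.example.com"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        datacenter: "DC1"
        folder: "/k8s"
        name: "{{ item }}"
        template: "centos7-template"
        state: poweredon
      loop:
        - k8s-node-01
        - k8s-node-02
        - k8s-node-03
```

Once the VMs are up, Rancher (or RKE) can take over and turn them into cluster nodes.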
Logging and Monitoring
Rancher has a built-in Fluentd deployment that can be leveraged to build an EFK (Elasticsearch, Fluentd, Kibana) stack. Each cluster can be configured to push its Fluentd logs to an Elasticsearch instance, and Kibana acts as an excellent front end for viewing and searching those logs.
Prometheus is an excellent means of collecting monitoring metrics. Use the Prometheus server to store time-series data, Alertmanager to manage alerts, node-exporter to export metrics from the nodes, and kube-state-metrics to generate metrics for all K8S objects.
Prometheus itself has only a minimal UI, so Grafana is typically connected to the Prometheus server to provide graphs and meaningful dashboards for monitoring.
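A Rancher-deployed monitoring stack wires these pieces together automatically, but a hand-rolled prometheus.yml illustrating the same components might look like this (job names and service endpoints are illustrative assumptions, not values from any particular install):

```yaml
# Minimal Prometheus configuration sketch: scrape node-exporter and
# kube-state-metrics, and forward alerts to Alertmanager.
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: node-exporter
    kubernetes_sd_configs:
      - role: endpoints           # discover targets via the K8S API
    relabel_configs:
      - source_labels: [__meta_kubernetes_endpoints_name]
        regex: node-exporter      # keep only the node-exporter endpoints
        action: keep
  - job_name: kube-state-metrics
    static_configs:
      - targets: ["kube-state-metrics.kube-system.svc:8080"]
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager.monitoring.svc:9093"]
```

Grafana then simply points at the Prometheus server as a data source.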
Persistent Storage
Everything in Kubernetes is dynamic and stateless. Well, at least that is how it is supposed to be…but this goes against the principles of a traditional storage solution. Choosing a viable persistent storage solution is one of the toughest challenges you will face. Several popular solutions in the market include Ceph, Rook, StorageIO, and Portworx.
Portworx provides data mobility, high availability, platform independence, encryption, and dynamic provisioning of persistent volumes. On worker nodes, we recommend adding another disk (vmdk) to create a storage pool through Portworx. Portworx also ships with its own intelligent scheduler, Stork, which can help save on licensing costs by installing Portworx on only a subset of the worker nodes: deployments that require persistent storage can specify Stork as their scheduler, so those pods are placed only on the worker nodes that have Portworx installed.
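A minimal sketch of that pattern, assuming a Portworx install and the in-tree portworx-volume provisioner (names such as px-replicated, db, and db-data are placeholders):

```yaml
# StorageClass backed by Portworx, keeping two replicas of each volume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
---
# Deployment that requests the Stork scheduler, so its pods land only
# on worker nodes where Portworx is installed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      schedulerName: stork          # use Stork instead of the default scheduler
      containers:
        - name: db
          image: postgres:13
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: db-data      # PVC bound via the px-replicated class
```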
Security
Container security is an ever-evolving subject, and given the dynamic nature of pods, it is essential to have visibility and control over all the processes and communication that occur inside a pod. NeuVector provides an excellent solution for continuous runtime protection of hosts and pods. NeuVector can scan Kubernetes clusters, nodes, pods, and container images for vulnerabilities, and as a side benefit it can run Docker and Kubernetes benchmarks against the clusters. It can also act as a network firewall by learning the normal behavior of a pod or service and dynamically creating security policies based on it. Once a service is in 'protect mode', NeuVector will block any unauthorized process or network communication for that pod or service.
Load Balancing
Once applications are deployed inside the K8S clusters, there are several options for exposing them outside the cluster. Another consideration is whether you are migrating applications from legacy infrastructure to containers and want to preserve a roll-back posture, or have services in the legacy environment that need to consume services that have been migrated to the K8S cluster.
AVI Networks offers a software-defined load balancer with a separate control plane and data plane. It provides load balancing, traffic management, scalability, and end-to-end automation for K8S services. AVI deploys its service engines as pods on the K8S cloud, where they handle the north-south traffic and load balancing for the K8S services.
Configured with a DNS server and a pool of IPs for IPAM, AVI automatically creates a virtual service every time an ingress is created in the K8S cloud. It assigns an IP from IPAM, creates a DNS entry, and configures the back-end pool of pods. AVI also lets you add various HTTP policies, along with network security policies, through annotations on your ingress.
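From the application's point of view this is just a standard Kubernetes Ingress; the AVI controller watches it, creates the virtual service, allocates a VIP from IPAM, and publishes the DNS record. A sketch follows; the avi_proxy annotation (a JSON blob of AVI-side settings) is shown as an assumption, so verify the exact keys against the AVI documentation for your version:

```yaml
# Ordinary Ingress that AVI turns into a virtual service.
# Hostname, service name, and the avi_proxy contents are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    avi_proxy: '{"virtualservice": {"analytics_policy": {"full_client_logs": {"enabled": true}}}}'
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```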
CI/CD
Kubernetes helps make true Continuous Deployment a reality, since everything an application needs, including its dependencies, is packaged inside a container, and Kubernetes takes care of scheduling the workload onto specific worker nodes. The rolling update strategy executes continuous deployments with zero downtime.
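The zero-downtime behavior comes from the Deployment's rollout settings: Kubernetes replaces pods gradually and only counts a new pod as available once its readiness probe passes. A minimal sketch (the image name and probe path are placeholders):

```yaml
# Deployment using a rolling update so old pods are replaced one at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels: {app: api}
  template:
    metadata:
      labels: {app: api}
    spec:
      containers:
        - name: api
          image: registry.example.com/api:v2   # placeholder image tag
          readinessProbe:                      # gates traffic to new pods
            httpGet: {path: /healthz, port: 8080}
```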
Jenkins is a great tool for Continuous Integration and image build-outs, offering integration with GitLab, Nexus, JFrog Artifactory, SonarQube, NeuVector, Fortify, Helm, and Rancher to provide a complete CI/CD pipeline.
Helm packages the whole application stack into charts, including everything the application needs - the pods, services, secrets, ingresses, persistent storage, etc. Helm also makes it possible to have a consistent deployment across different environments (see Figure 3 for the CI/CD component diagram).
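That consistency typically comes from one chart plus a values file per environment. A hypothetical values-prod.yaml might override only what differs between environments (all names and sizes here are placeholders):

```yaml
# values-prod.yaml — per-environment overrides for a single shared chart.
replicaCount: 4
image:
  repository: registry.example.com/api
  tag: "1.4.2"
ingress:
  enabled: true
  host: api.example.com
persistence:
  enabled: true
  storageClass: px-replicated   # placeholder storage class name
  size: 20Gi
```

The pipeline then deploys the same chart everywhere with a command along the lines of `helm upgrade --install api ./api-chart -f values-prod.yaml`.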
To sum up, there are multiple ways to solve this puzzle, and with new developments every day, it is becoming easier than ever to get your applications deployed in a K8S cluster. I hope this gives you a good insight into the areas you need to focus on and some of the options for making a K8S cluster a reality for your organization.