Best Practices for Kubernetes deployments from Portshift
|Richard Harris in DevOps Monday, January 27, 2020|
Portshift presents five security best practices for DevOps and development professionals managing Kubernetes deployments. Integrating these security measures into the CI/CD pipeline will assist organizations in the detection and remediation of security issues earlier in the development process, allowing faster and shorter cycles while assuring safe and secure deployments.
The use of containers continues to rise in popularity in enterprise environments, increasing the need for a means to manage and orchestrate them. There's no dispute that Kubernetes (K8s) has emerged as the market leader in container orchestration for cloud-native environments. Since Kubernetes plays a critical role in controlling who can access containerized workloads and what can be done with them, its security should be well understood and managed. It is therefore essential to use the right deployment architecture and security best practices for all deployments.
Because Kubernetes deployments consist of many different components (including the Kubernetes master and nodes, the servers that host Kubernetes, the container runtime used by Kubernetes, the networking layers within the cluster, and the applications that run inside containers hosted on Kubernetes), securing Kubernetes requires DevOps teams and developers to address the security challenges associated with each of these components.
To overcome these challenges, here are five security best practices for tackling K8s security:
1. Authorization: Kubernetes offers several authorization methods, which are not mutually exclusive. RBAC is the recommended mechanism for authorization policies that control how the Kubernetes API is accessed via permissions. ABAC is an additional authorization mechanism that provides powerful, fine-grained policies, but it is more complex and has a few operational constraints (e.g., the API server must be restarted after permission changes).
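As a sketch of the RBAC approach, a least-privilege policy pairs a Role with a RoleBinding. The namespace and service-account names below (`staging`, `ci-runner`) are hypothetical examples:

```yaml
# Hypothetical Role: read-only access to pods in the "staging" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Bind the Role to a hypothetical service account used by a CI runner
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: ServiceAccount
  name: ci-runner
  namespace: staging
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because RBAC is deny-by-default, the `ci-runner` account gains only the listed verbs on pods and nothing else, and changes take effect without restarting the API server.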
2. Pod Security: Since each pod contains a set of one or more containers, it is essential to control their deployment configurations. Kubernetes Pod Security Policies are cluster-level resources that allow users to deploy their pods securely by controlling their privileges, volume access, and classic Linux security options such as seccomp and SELinux profiles.
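A minimal sketch of such a policy, assuming the `policy/v1beta1` PodSecurityPolicy API available at the time of writing, might block privileged containers and force workloads to run as non-root:

```yaml
# Example restrictive PodSecurityPolicy (illustrative; names are arbitrary)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                 # no privileged containers
  allowPrivilegeEscalation: false   # no setuid-style escalation
  requiredDropCapabilities: ["ALL"] # drop all Linux capabilities
  runAsUser:
    rule: MustRunAsNonRoot          # containers may not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  fsGroup:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  # restrict pods to a safe subset of volume types
  volumes: ["configMap", "emptyDir", "secret", "persistentVolumeClaim"]
```

A pod that requests `privileged: true` or runs as UID 0 would be rejected at admission time under this policy.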
3. Secure the Production Environment: As companies move more deployments into production, the volume of workloads that may be vulnerable at runtime grows. This risk can be mitigated by applying the practices described here, as well as by making sure that your organization maintains a healthy DevOps/DevSecOps culture.
4. Securing CI/CD Pipelines on Kubernetes: Running CI/CD allows workloads to be built, tested, and verified before they are deployed to K8s clusters. Security must be baked into the CI/CD process so that developers can quickly discover and mitigate potential vulnerabilities and misconfigurations; otherwise, attackers can exploit those vulnerabilities in K8s production environments once the images are deployed. Inspecting image contents and deployment configurations at the CI/CD stage achieves this purpose.
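One way to bake scanning into the pipeline is a dedicated scan stage that fails the build on serious findings. The sketch below assumes GitLab CI and the open-source Trivy scanner as one possible choice; the job name and variables are illustrative:

```yaml
# Hypothetical GitLab CI job: fail the pipeline if the freshly built image
# contains HIGH or CRITICAL vulnerabilities (Trivy used as an example scanner).
scan-image:
  stage: test
  image: aquasec/trivy:latest
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

Because the scanner returns a non-zero exit code on findings, vulnerable images never reach the deploy stage, which is exactly the "shift-left" effect the article describes.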
5. Add Service Mesh to the Network Security Layer: The service mesh addresses common tasks associated with microservices in a unified and agnostic manner. A service mesh automatically balances inter-service traffic based on policies. It also offers a number of security, reliability, and observability benefits that help manage cluster traffic and increase network stability, and it pairs naturally with a "zero-trust" security model.
A powerful complement to K8s security infrastructure is the service mesh. It supports a secure cloud-native environment by automatically taking care of service discovery and connection so that developers and individual microservices do not have to. Used in conjunction with Kubernetes, the service mesh applies security at the service level, not just at the network level. The service mesh enables the highest level of security when used in conjunction with identity-based workload protection to secure containers and microservices.
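As a concrete illustration of service-level, zero-trust enforcement (assuming Istio as the mesh; the `production` namespace is a hypothetical example), a single policy resource can require mutual TLS between all workloads in a namespace:

```yaml
# Example Istio policy: reject any plaintext traffic between services
# in the "production" namespace by enforcing mutual TLS.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT
```

With `STRICT` mode, every service-to-service connection is authenticated by workload identity and encrypted in transit, which is security applied at the service level rather than only at the network perimeter.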
"As the leading orchestration platform, Kubernetes is in active use at AWS, Google Cloud Platform, and Azure," said Ran Ilany, CEO and Co-Founder, Portshift. "With the right and holistic security infrastructure in place, it is set to change the way applications are deployed in the cloud with unprecedented efficiency and agility. Portshift delivers an intuitive and centralized way to govern Kubernetes microservices to make this a reality."