Challenges Of Kubernetes Cost Observability & AWS Cost Management

Nov 11th, 2021

Kubernetes Cost Optimization

Containers have been the breakout technology of the last decade thanks to their flexible scalability and portability. According to a Gartner report, by 2022, 75% of companies will be running containers in production. This means that we will be deploying more and more containerized applications to the cloud. And as those applications become more complex and demanding, with more and more containers, the need for a container orchestrator is inevitable.

Kubernetes is the de facto container management platform in the market. But while deploying an application into Kubernetes is a straightforward process, cost management was never one of Kubernetes’ core features. 

In this blog, we’ll discuss the challenges of Kubernetes cost estimation and tracking.

Some Background

Kubernetes clusters consist of a control plane and several worker nodes. Cloud providers typically offer the control plane free of charge or bill it hourly per cluster. Worker nodes are the virtual machine instances where the actual Kubernetes workloads run. VM instances come in different flavors of CPU, RAM, and storage, and hence carry different price tags.

The Kubernetes control plane schedules applications onto worker nodes as pods, which is where they actually run. You can also define resource requirements for pods to allocate CPU and RAM and assign storage. In addition, the number of pods can scale up and down with actual usage. With multiple applications running on the same worker nodes and being scaled up and down automatically, it’s fairly complex and time-consuming to calculate the actual or estimated cost of an application running in Kubernetes.
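To make that concrete, here is a minimal sketch of a per-pod cost estimate based purely on resource requests. The per-unit prices and replica counts are hypothetical placeholders; real rates depend on your provider, region, and instance family, and real usage fluctuates continuously.

```python
# Minimal sketch: estimate an app's hourly cost from its pods' resource
# requests. The per-unit prices below are hypothetical placeholders; real
# rates depend on your provider, region, and instance family.

CPU_PRICE_PER_CORE_HOUR = 0.035  # assumed $/vCPU-hour
MEM_PRICE_PER_GB_HOUR = 0.005    # assumed $/GB-hour

def pod_hourly_cost(cpu_cores: float, mem_gb: float) -> float:
    """Price one pod by the CPU and memory it requests."""
    return cpu_cores * CPU_PRICE_PER_CORE_HOUR + mem_gb * MEM_PRICE_PER_GB_HOUR

# Three replicas requesting 0.5 vCPU / 0.25 GB each off-peak, five replicas
# requesting 0.5 vCPU / 0.5 GB each at peak: the "price" of the same app
# changes with the hour of the day.
off_peak = 3 * pod_hourly_cost(0.5, 0.25)
peak = 5 * pod_hourly_cost(0.5, 0.5)
print(f"off-peak: ${off_peak:.4f}/h, peak: ${peak:.4f}/h")
```

Even this toy version needs per-pod requests, replica counts over time, and accurate unit prices; a real cluster mixes dozens of apps and node types on top of that.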

Kubernetes thus presents major challenges when it comes to cost management, but there are several ways to overcome them. Let's discuss some of these.

The Kubernetes Abstraction: From Applications to Infrastructure

The Kubernetes API creates a strong abstraction between the infrastructure and the applications running on a cluster. Let's assume you’ve deployed a MySQL database to your Kubernetes cluster. By design, Kubernetes not only starts a MySQL container on one of the nodes in your cluster, it also creates volumes, secrets, stateful sets, replica sets, pods, service accounts, config maps, services, and ingresses to make your new MySQL database scalable, reliable, and reachable.

You will see the costs of the infrastructure items, such as storage or compute, in the AWS Cost and Usage Reports (AWS CUR). However, the reports give you no way to break the abstraction and attribute the cost of the cloud infrastructure to Kubernetes resources such as your MySQL database.

The side effect of this abstraction is the question: To which Kubernetes resource should you assign the costs? In other words, you need to find a Kubernetes unit for cost calculation, such as pods, deployments, or namespaces. In the end, you should know how much money your dev namespace, your test deployments, or any other Kubernetes resource burns through. Next, you need to dive into your AWS CUR data and start allocating the total costs to your Kubernetes unit, which we’ll discuss next.

Allocation of Total Costs

Kubernetes resources are dynamic in terms of their numbers and infrastructure usage. In other words, you may have three pods using 256 MB of memory now, and you may have five pods using 512 MB of memory during peak time. 

Step 1 — Track the actual usage: You’ll see the total monthly CPU, memory, networking, and storage costs in your public cloud billing reports, such as AWS CUR. With simple math, you can then allocate CPU and memory costs according to pods' resource limits or actual usage metrics, as in the sketch below.
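Here is a minimal sketch of that math, splitting one worker node's hourly cost across its pods in proportion to their resource requests. The node price and pod figures are illustrative, not real billing data.

```python
# Sketch of the "simple math": split a worker node's hourly cost across its
# pods in proportion to each pod's share of requested CPU and memory.
# The node price and pod figures are illustrative, not real billing data.

node_hourly_cost = 0.192  # assumed $/hour for the worker node

pods = {
    "checkout": {"cpu": 1.0, "mem_gb": 2.0},
    "search":   {"cpu": 0.5, "mem_gb": 1.0},
    "mysql":    {"cpu": 0.5, "mem_gb": 4.0},
}

total_cpu = sum(p["cpu"] for p in pods.values())
total_mem = sum(p["mem_gb"] for p in pods.values())

for name, p in pods.items():
    # Average the pod's CPU share and memory share, then apply it to the node cost.
    share = (p["cpu"] / total_cpu + p["mem_gb"] / total_mem) / 2
    print(f"{name}: ${node_hourly_cost * share:.4f}/h")
```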

Step 2 — Distribute overhead expenses: Some costs, such as networking and storage, are shared. If the network cost is split between teams, namespaces, and applications, you will need a method to allocate it fairly, as sketched below.
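One common method is to distribute the shared line item using a proportional key. In this sketch the key is bytes transferred per namespace; the key choice and all figures are assumptions, so pick whatever cost driver fits your setup.

```python
# Sketch: distribute a shared overhead line item (here, networking) across
# namespaces using a proportional key such as bytes transferred. The key
# and all figures are assumptions; pick whatever cost driver fits your setup.

shared_network_cost = 120.0  # assumed monthly network charge from the bill

bytes_by_namespace = {"dev": 2.0e12, "test": 0.5e12, "prod": 7.5e12}
total_bytes = sum(bytes_by_namespace.values())

allocation = {
    ns: shared_network_cost * b / total_bytes
    for ns, b in bytes_by_namespace.items()
}
print(allocation)  # {'dev': 24.0, 'test': 6.0, 'prod': 90.0}
```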

Multi-Cloud Landscapes

Kubernetes is an open-source platform offered by all the major public clouds and also available on-premises. So it’s a common use case to have one Kubernetes cluster in the Tokyo region of GCP and another in the Oslo region of Azure. This makes your applications highly available while also keeping them close to your customers in the corresponding countries and regions.

Although Kubernetes itself is the same everywhere, the billing reports of your different cloud providers will be completely different in structure. The servers of different providers also come in different flavors with different price tags. This means that if you go for a multi-cloud or hybrid-cloud approach, you need to combine the billing data and calculations from multiple providers.
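A common first step is mapping each provider's billing rows onto one schema before allocating anything. The sketch below uses a handful of real column names from the AWS CUR and the GCP BigQuery billing export, but reduces both to a deliberately simplified toy subset; production reports carry far richer column sets.

```python
# Sketch: map billing rows from two providers onto one minimal schema before
# combining them. The AWS column names come from the CUR and the GCP fields
# from the BigQuery billing export, but both are reduced to a toy subset here.

def normalize_aws(row: dict) -> dict:
    return {
        "provider": "aws",
        "service": row["lineItem/ProductCode"],
        "cost": float(row["lineItem/UnblendedCost"]),
        "currency": row["lineItem/CurrencyCode"],
    }

def normalize_gcp(row: dict) -> dict:
    return {
        "provider": "gcp",
        "service": row["service.description"],
        "cost": float(row["cost"]),
        "currency": row["currency"],
    }

rows = [
    normalize_aws({"lineItem/ProductCode": "AmazonEC2",
                   "lineItem/UnblendedCost": "12.34",
                   "lineItem/CurrencyCode": "USD"}),
    normalize_gcp({"service.description": "Compute Engine",
                   "cost": "9.87", "currency": "USD"}),
]
total = sum(r["cost"] for r in rows)
print(f"combined spend: {total:.2f} {rows[0]['currency']}")
```

Once everything lands in one schema, the allocation steps from the previous section can run over the combined data regardless of where a cluster lives.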

Savings Insights and Opportunities

Cost monitoring systems should lead to savings insights and estimations, and these are even more critical when an application and its infrastructure are as dynamic as Kubernetes. Luckily, there is some low-hanging fruit that can create savings opportunities:

  • Define, measure, and update resource requirements of pods (rightsizing): If you can spot pods with surplus resource allocation, you can optimize and free up resources (see the sketch after this list). 

  • Distribute workloads to cheaper regions, zones, and nodes: Pricing is not the same for each region, zone, and server type within a cloud provider. Always consider moving some of your applications to other parts of the cloud to optimize costs.

  • Create a mix of different nodes and use an optimization strategy for scheduling: Kubernetes can assign workloads to particular servers by matching groups of applications to node groups. With an optimal scheduling strategy, you can decrease the total resources allocated and thus the cost. 
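As a starting point for rightsizing, here is a minimal sketch that flags pods whose measured usage sits well below their requests. The usage numbers would normally come from your metrics stack (Prometheus, for example); the figures and the threshold are illustrative assumptions.

```python
# Sketch of rightsizing detection: flag pods whose measured usage sits well
# below what they request. Usage numbers would normally come from a metrics
# stack such as Prometheus; the figures and threshold are illustrative.

WASTE_THRESHOLD = 0.5  # flag pods using less than half of their request

pods = [
    {"name": "api",    "cpu_request": 2.0, "cpu_used": 0.3},
    {"name": "worker", "cpu_request": 1.0, "cpu_used": 0.8},
]

for pod in pods:
    utilization = pod["cpu_used"] / pod["cpu_request"]
    if utilization < WASTE_THRESHOLD:
        print(f"{pod['name']}: requests {pod['cpu_request']} vCPU, "
              f"uses {pod['cpu_used']} ({utilization:.0%}), rightsizing candidate")
```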

With these guidelines and constant monitoring, which we’ll cover next, you will have a chance to cut your Kubernetes clusters' cloud bills.

Monitoring and Alerts

Monitoring your applications to watch their metrics and health is a necessity in the cloud. Monitoring systems can collect, store, aggregate, and visualize your application metrics. Plus, you can define alerts based on specific metrics and receive notifications (even a phone call) if something critical happens during the night or on your lovely holiday on a Caribbean beach. 

If you’re using Kubernetes, you will eventually get a surprise bill. We have all been there: faulty software overloads the cluster and your resource usage skyrockets. Your cost management system has to collect cost metrics and be smart enough to find inconsistencies and raise alerts whenever there is a sharp increase in cloud costs. 
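At its simplest, such an alert can be a day-over-day comparison on collected daily cost totals, as in this sketch. Real systems would apply smarter anomaly detection; the 50% threshold and the dollar figures below are assumptions for illustration.

```python
# Sketch: a naive day-over-day spike check on collected daily cost totals.
# Real systems would use smarter anomaly detection; the 50% threshold and
# the dollar figures below are assumptions for illustration.

SPIKE_FACTOR = 1.5  # alert when a day costs 50% more than the day before

daily_costs = [41.2, 40.8, 43.1, 97.6]  # illustrative daily totals in dollars

for yesterday, today in zip(daily_costs, daily_costs[1:]):
    if today > yesterday * SPIKE_FACTOR:
        print(f"ALERT: daily cost jumped from ${yesterday:.2f} to ${today:.2f}")
```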

Conclusion

Kubernetes comes with many features for running scalable and reliable applications in the cloud, but it doesn’t come with a cost management system out of the box. The current solutions are not good enough for determining the cost of running distributed applications in the cloud. However, with the increased adoption of containerized applications running on Kubernetes, the cost management of such systems has become more critical. 

In this blog, we’ve summarized the challenges of Kubernetes cost management and listed some opportunities to address them. The bottom line is: you have to track and optimize what you’re actually paying for the products and applications running in your Kubernetes clusters. 

If you’re looking for a self-service, no-code platform to treat your costs as a priority metric and attribute each dollar of your cloud bill to its proper place, get in touch with Finout today!
