Kubernetes Labels: Examples, Use Cases, and Best Practices [2025]

May 14th, 2025

What are Kubernetes Labels?

Kubernetes labels are key-value pairs that are part of an application's metadata. With labels, you can optimize how you leverage the Kubernetes API and third-party integrations. For example, labels are how client tools and libraries such as kubectl and Helm identify and select the objects they work with. Not only that, they ensure that everyone on your team has instant access to the application metadata they need.

Common use cases of Kubernetes labels are grouping resources by environment, performing bulk operations, and scheduling pods based on node labels.

Kubernetes labels are highly customizable, but that is not to say there are no conventions. We’ll explain these conventions, describe the Kubernetes standard labels, and show how to use Kubernetes labels to your advantage.

 

Kubernetes Labels vs. Annotations

While both labels and annotations attach metadata to Kubernetes objects, they serve different purposes.

Labels are intended for identifying and selecting resources. Kubernetes uses them for grouping, querying, and organizing objects like pods and services. For example, a label like app=frontend lets you target all frontend pods in a deployment or service selector.

Annotations store non-identifying metadata. This includes build information, deployment timestamps, or configuration details that aren’t used for selection or filtering. Annotations can hold larger and more complex data than labels, and Kubernetes does not use them internally to organize or manage resources.

Here is an example of a label:

```YAML
"metadata": {
  "labels": {
    "tenant": "explo-6834",
    "environment": "production",
    "tier": "backend",
    "app": "5784762",
    "version": "1.82"
  }
}
```

 And here is an example of an annotation:

```YAML
"metadata": {
  "annotations": {
    "first_deployment": "1646126616",
    "deployed_by": "daena@example.com"
  }
}
```

Note: if an annotation later comes to be used to group or select objects, it has effectively been elevated to a label, and you should reassign it as one.

 

When Should You Use Kubernetes Labels?

Group Resources by Environment or Application Component

Kubernetes labels can organize resources in a flexible way. Since labels are arbitrary key-value pairs, you can design them to reflect environment (env=prod), application component (tier=backend), or any custom categorization needed. This makes it possible to run targeted queries, like retrieving all pods running a particular version of an app (version=2.1). The Kubernetes API supports label selectors natively, so label-based queries stay efficient and fast, even at scale.

Such granular data allows you to make specific calls. For example, you want to list the status of all production pods:

```
kubectl get pods -l 'environment=production'
```

This is far superior to requesting all pods and then filtering through the output afterwards.

Labels are also very useful for release management. Found a backend bug and want to release a patch? Simply deploy a new set of backend instances labeled version=1.83, then change the service's label selector from tier=backend, version=1.82 to tier=backend, version=1.83. The pods still running 1.82 are orphaned, and the service now routes to the new set of instances, as sketched below.
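As a minimal sketch (the Service name backend-svc, ports, and field layout are illustrative, not taken from a real manifest), the selector change might look like this:

```YAML
# Before the patch: the Service routes to the v1.82 backend pods
apiVersion: v1
kind: Service
metadata:
  name: backend-svc          # hypothetical Service name
spec:
  ports:
    - port: 80
      targetPort: 8080       # placeholder ports
  selector:
    tier: backend
    version: "1.82"
# After deploying pods labeled version=1.83, update the selector:
#   version: "1.83"
```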

Perform Bulk Operations

Labels make bulk operations predictable and repeatable. If your deployment process includes a temporary set of pods labeled with phase=pre-migration, you can easily delete or update them in one command once they're no longer needed. Labels also help with automation scripts and CI/CD pipelines by giving tools a reliable way to find and act on related resources.

Another use case is applying updates or restarts across a group. For example, if you tag all staging resources with env=staging, you can restart every pod in that group during a test rollout using kubectl rollout restart deployment -l env=staging. This avoids hardcoding resource names and keeps workflows scalable.
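A minimal sketch of both bulk operations described above, reusing the phase and env label keys from this section:

```
# Remove the temporary pre-migration pods in one command
kubectl delete pods -l 'phase=pre-migration'

# Restart every staging deployment during a test rollout
kubectl rollout restart deployment -l 'env=staging'
```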

Schedule Pods Based on Node Labels

Node labels allow precise control over where pods get scheduled. By applying labels like region=us-east or hardware=gpu, you can use node selectors or affinity rules in your pod specs to target suitable nodes. This is crucial for workloads with specific requirements, such as GPU-bound machine learning models or low-latency network services.

For example, if a pod must run on SSD-backed nodes, label those nodes with disktype=ssd and use a nodeSelector like:

```YAML
nodeSelector:
  disktype: ssd
```

This ensures that the scheduler only places the pod on nodes with matching labels. It also improves resource utilization and compliance with operational constraints.
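A minimal sketch, assuming a hypothetical node named worker-1: label the node, then reference that label from the pod spec.

```
kubectl label nodes worker-1 disktype=ssd
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  name: ssd-bound-pod          # hypothetical pod name
  labels:
    tier: backend
spec:
  nodeSelector:
    disktype: ssd              # only schedule onto nodes carrying this label
  containers:
    - name: app
      image: nginx:1.25        # placeholder image
```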

Kubernetes Standard Labels

Kubernetes services and replication controllers use labels to manage pods, target workloads to specific instance types, and control services across multiple cloud provider zones. Therefore, label use is hard-baked into the Kubernetes design.

Standard labels include (key: what the value describes):

  • app.kubernetes.io/name: the name of the application
  • app.kubernetes.io/part-of: the higher-level application this micro-service supports
  • app.kubernetes.io/managed-by: the package management system (the tool managing the application)

Many standard labels are populated automatically by the tools that manage your resources, so it is well worth adopting them for your daily operations and client tools. For example:

`app.kubernetes.io/managed-by: "" `

will be populated with:

`app.kubernetes.io/managed-by: helm`

if Helm is the package manager.
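For instance, a backend micro-service deployed with Helm as part of a larger application might carry the standard labels like this (the name and part-of values are illustrative):

```YAML
metadata:
  labels:
    app.kubernetes.io/name: payment-api      # hypothetical application name
    app.kubernetes.io/part-of: web-store     # hypothetical higher-level application
    app.kubernetes.io/managed-by: helm
```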

Working with Kubernetes Labels

Constraints on Labels

The following syntax constraints apply to labels:

  • Keys must be unique within a given object
  • The key's name segment is required and may be up to 63 characters; the optional prefix may be up to 253 characters
  • Names and values must begin and end with an alphanumeric character [a-z0-9A-Z] (a value may also be empty)
  • Dashes "-", underscores "_", and dots "." are allowed internally
  • The optional prefix must be a series of DNS labels separated by dots and followed by a slash

The prefix allows users, automated system components (for example, kube-scheduler), and third-party integrations to manage their own labels without collisions.

Let's unpack two of these constraints that can cause confusion:

  • Enforcing unique keys prevents copy/paste mistakes such as duplicating the environment property.

```YAML
"metadata": {
  "labels": {
    "environment": "production",
    "environment": "development"
  }
}
```

  • Consider a standard label such as app.kubernetes.io/name:
  • app.kubernetes.io is the optional prefix (a DNS subdomain)
  • name is the name segment.

Searches

The Kubernetes API supports searches for:

  • equality, i.e., exact matches
  • inequality, i.e., a "does not match"
  • sets

Equality uses = (or, if the fear of resetting a value leaves you feeling itchy, ==). Inequality is the standard !=, and set-based requirements use in, notin, and exists, with a comma-separated list of values in parentheses.

Returning to our earlier environment label, we could use:

  • equality, to return the data on the pods in production:

```
kubectl get pods -l 'environment==production'
```

 

  • inequality, to return the pods that are not in staging (i.e., those in production and development):

```
kubectl get pods -l 'environment!=staging'
```

Or we can search for sets, i.e., an array. Set searches apply "in", "notin", and "exists":

```
kubectl get pods -l 'environment in (production)'
```

to return the data on the pods in production.

or

```

kubectl get pods -l 'environment notin (development, staging)'

```

Inside the parentheses, the comma separates the set's values. When you chain multiple selector requirements together, the separating "," comma acts as an AND (&&) operator.

Note that OR (||) is not supported.

Multiple Conditions

If you provide more than one condition, the matching object(s) must satisfy all of the constraints. For example:

```
environment=production,tier!=frontend
```

Similarly, set-based conditions return the sub-set of objects that match all the given conditions.

Thus, according to our example:

```YAML
"metadata": {
  "labels": {
    "tenant": "explo-6834",
    "environment": "production",
    "tier": "backend",
    "app": "5784762",
    "version": "1.82"
  }
}
```

Where environment may be: staging, development, or production, then:

```
kubectl get pods -l 'environment notin (development, staging),tier in (backend)'
```

would return the same objects as

```kubectl get pods -l 'environment=production,tier!=frontend'```

Kubernetes Labels: Best Practices

1. Create a Consistent Labeling Strategy

When a system is designed to be so open-ended, it is vital to apply your own strategies and conventions to ensure that your labels provide you with the functionality you need.

Once such conventions are established, you can add checks at the Pull Request (PR) level to verify that configuration files include all the required labels.

Setting an informative prefix helps you instantly identify which service or family of functions a label applies to. It is good practice to choose a prefix to represent your company and sub-prefixes to identify specific projects, for example:
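With a hypothetical company domain example.com, a company-wide prefix and a project sub-prefix might look like this (both keys are illustrative, not standard labels):

```YAML
metadata:
  labels:
    example.com/team: billing                  # company-wide prefix (illustrative)
    billing.example.com/project: invoicing     # project sub-prefix (illustrative)
```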

If you want to see the labels applied to an object, you can add this flag to your call:

```kubectl get pod my-example-pod --show-labels```

2. Use Templates

In K8s, the concept of a template has a very specific application, thanks to pod templates.

So, let's apply the term beyond pod templates, because it is good practice to give all your configuration files a rigid structure built from ready-to-use patterns. From `PodTemplate` (specifications for creating Pods) to the metadata structure of your applications, a ready-to-use label strategy keeps the whole team on the same page.

What you tag will depend on your needs, but will probably include those provided in our examples, such as:

  • environment
  • tier
  • version
  • application uuid

And, in a multi-tenant environment, where a pod is dedicated to one tenant, never forget:

  • tenant

Because you will love the cloud cost management that Finout can hand you by including tenancy!

Once you have defined a labeling strategy that teams can apply, the next best-practice step is to validate the process. Conduct static analysis of all resource config YAML files to verify the presence of all required labels; a PR should only be merged if the configuration file provides all the required labels.
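As a complementary runtime check (not a replacement for PR-time static analysis), the not-exists selector can flag resources that slipped through without a required key; here tenant is assumed to be one of your required labels:

```
# List pods in any namespace that are missing the required "tenant" label
kubectl get pods --all-namespaces -l '!tenant'
```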

3. Automate Labeling for CI/CD

Within your continuous integration/continuous delivery (CI/CD) pipeline, you can automate some labels for cross-cutting concerns. Attaching labels automatically with CD tooling ensures consistency and spares developer effort. Again, validate that those labels are in place: CI jobs should enforce proper labeling by making a build fail and notifying the responsible team if a label is missing.

You can define variables in Jenkinsfiles or GitHub Actions workflows and parameterize the label values to automate labels in Kubernetes manifests. At the same time, you can use Helm to easily deploy each version you want, and automated labels will support every deployment strategy (canary, rolling update, etc.).
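For example, a pipeline step might inject build metadata into the chart's label values; this is a sketch under the assumption that your chart exposes a podLabels value and maps it into the pod template:

```
# In a Jenkinsfile or GitHub Actions step, pass CI variables through to labels
helm upgrade --install my-app ./chart \
  --set-string podLabels.version="${GIT_COMMIT}" \
  --set-string podLabels.environment=staging
```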

Helm sample usage:

```YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name:
  labels:
    app.kubernetes.io/name:
    app.kubernetes.io/instance:
  annotations:
    kubernetes.io/change-cause:
```
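Inside a chart, those blank fields would typically be filled with template expressions; a minimal sketch, where the mychart.fullname helper and the changeCause value are assumptions about your chart:

```YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}                          # helper defined by your chart
  labels:
    app.kubernetes.io/name: {{ .Chart.Name }}
    app.kubernetes.io/instance: {{ .Release.Name }}
  annotations:
    kubernetes.io/change-cause: {{ .Values.changeCause | quote }}   # assumed chart value
```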

Reaping the Benefits in a Multi-Tenancy or Multi-Cloud Environment

Kubernetes can run on any public cloud provider or on-premises system — or apply a hybrid approach. Therefore, it is possible to have different bills from different service providers for your clusters. Even if you are not stretched across different providers, you may provide services to multiple tenants via your pods on servers and load balancers. That means you need a system capable of tracking the cost of these components.

4. Use Labels for Simplified Searching

One of the benefits of applying a labeling strategy is that although each component may be granular, it becomes part of a bigger picture. And, those labels let you zoom into or rebuild that picture. Let's say you have a multi-tenant environment. If you want to know all of the services a particular tenant uses, then you can collate that tenant's data just by filtering.

Say our example tenant, tenant=explo-6834, is supported by both tier=backend and tier=frontend.

Should you receive a query regarding a perceived service issue, a simple API call retrieves all the service-related data you need for that tenant. No need to look up any system diagrams to see which applications support that tenant's service.
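A minimal sketch of such a query, assuming the tenant label from our earlier example is applied consistently across resource types:

```
# Pull every pod, service, and deployment supporting this tenant
kubectl get pods,services,deployments --all-namespaces -l 'tenant=explo-6834'
```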

You retain your modularity without losing any visibility.

5. Leverage Labels for Advanced Cost Observability

Meaningful cost observability requires that FinOps teams can accurately calculate the costs per pod, pod-label, deployment, namespace, and other resources in your cluster. The key to achieving this is implementing a well-executed labeling strategy.

Cost insights can be relatively simple to achieve if you are in the enviable position of being able to assign a Kubernetes namespace to each tenant. In reality, DevOps usually faces the challenge of measuring a tenant’s usage of shared, autoscaled resources – which makes cost allocation more complicated. Cost allocation often requires assigning a tenant’s pro-rated usage of the cluster’s resources, including CPU, GPU, memory, disk, and network. This is where labels assist FinOps to allocate cost per customer, tenant, dev team, or business application.

But everything is shared! Don’t worry if your autoscaled architecture means that many pods support a multi-tenant service. Finout can also provide an abstraction layer, the Unit of Economics, to let FinOps zoom in on single-tenant costs.

Conclusion

Whether a dev wants to debug an issue, DevOps wants to shut down non-essential infrastructure resources over a long weekend, or FinOps wants to understand the costs in a multi-tenant environment, in K8s, labels give you that power.

As you may have noticed, managing resource usage in a highly volatile environment means that tracking actual usage levels and performing cloud cost management to distribute overhead expenses is no small challenge. Whether you are deploying Kubernetes clusters directly or with a cloud service provider such as AWS EKS, a robust tagging strategy pays huge dividends.

Reach out to learn more

Author spotlight

Yizhar is the CTO and Co-Founder of Finout. He has more than a decade of extensive experience in data science and data architecture.

Yizhar Gilboa
Co-Founder & CTO, Finout