Node group autoscaling
Autoscaling is not available for node groups with GPUs that do not have drivers installed.
In a Managed Kubernetes cluster, you can use Cluster Autoscaler to autoscale node groups. It helps use cluster resources optimally: depending on the load on the cluster, the number of nodes in a group is automatically increased or decreased. Cluster Autoscaler is installed automatically when the cluster is created; you only need to enable it.
Follow the recommendations below when using Cluster Autoscaler.
To autoscale pods rather than nodes, Managed Kubernetes uses Metrics Server.
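As a minimal sketch of pod autoscaling on top of Metrics Server, a HorizontalPodAutoscaler manifest; the Deployment name my-app and all thresholds are placeholder values:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:          # the workload to scale; my-app is a placeholder name
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu          # CPU utilization is reported by Metrics Server
        target:
          type: Utilization
          averageUtilization: 70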
Recommendations
For optimal autoscaling performance, we recommend:
- do not use more than one autoscaling tool at the same time;
- make sure that the project has sufficient quotas for vCPU, RAM, GPU, and disk capacity to create the maximum number of nodes in the group;
- specify resource requests in pod manifests (see the sketch after this list). For more information, see Resource Management for Pods and Containers in the Kubernetes documentation;
- configure PodDisruptionBudget for pods that must not be interrupted (see the sketch after this list). This helps avoid downtime when pods are moved between nodes;
- do not change node resources manually through the control panel: Cluster Autoscaler will not take these changes into account;
- check that nodes in the group have the same configuration and labels.
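A minimal sketch of the resource requests and PodDisruptionBudget recommendations above, assuming a Deployment named web; the names, image, and values are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          resources:
            requests:         # Cluster Autoscaler plans node capacity from these requests
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2             # keep at least two pods running while a node is drained
  selector:
    matchLabels:
      app: web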
Autoscaling with Cluster Autoscaler
Cluster Autoscaler does not need to be installed in the cluster — it is installed automatically when the cluster is created. To use Cluster Autoscaler in a cluster, enable node group autoscaling. After autoscaling is enabled, the default settings are used, but you can customize Cluster Autoscaler for each node group.
Working principle
Cluster Autoscaler works with existing node groups and pre-selected configurations. If a node group is in the ACTIVE status, Cluster Autoscaler checks every 10 seconds whether there are pods in the PENDING status and analyzes the load, that is, pod requests for vCPU, RAM, and GPU. Depending on the results of the check, nodes are added or removed; during this time the node group is in the PENDING_SCALE_UP or PENDING_SCALE_DOWN status. The cluster status during autoscaling remains ACTIVE. For more information about cluster statuses, see the View Cluster Status instruction.
The minimum and maximum number of nodes in a group can be set when autoscaling is enabled — Cluster Autoscaler will only change the number of nodes within these limits. If there are at least two working nodes left in other cluster node groups, you can configure autoscaling to zero nodes.
Adding a node
If there are pods in the PENDING status and the cluster does not have enough free resources to schedule them, the necessary number of nodes is added to the cluster. In clusters with Kubernetes version 1.28 and higher, Cluster Autoscaler works with several node groups at once and distributes new nodes evenly among them.
For example, you have two groups of nodes with autoscaling enabled. The load on the cluster has increased and requires the addition of four nodes. Two new nodes will be created in each node group at the same time.
In clusters with Kubernetes version 1.27 and below, nodes are added one per check cycle.
Deleting a node
If there are no pods in the PENDING status, Cluster Autoscaler checks the amount of resources that pods are requesting.
If the pods on a node request less than 50% of its resources, Cluster Autoscaler marks the node as unneeded. If resource requests on the node do not increase within 10 minutes, Cluster Autoscaler checks whether its pods can be moved to other nodes. These defaults correspond to the scaleDownUtilizationThreshold and scaleDownUnneededTime parameters, which can be overridden per node group (see Configure Cluster Autoscaler).
Cluster Autoscaler will not move pods, and therefore will not delete a node, if at least one of the following conditions is met:
- pods are covered by a restrictive PodDisruptionBudget;
- kube-system pods do not have a PodDisruptionBudget;
- pods are created without a controller such as a Deployment, ReplicaSet, or StatefulSet;
- pods use local storage;
- other nodes do not have enough free resources for the pods' requests;
- pods cannot be scheduled on other nodes because of nodeSelector, affinity and anti-affinity rules, or other parameters.
You can allow such pods to be moved by adding an annotation to them:
cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
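The annotation is set on pods, so in a Deployment it goes into the pod template metadata; a minimal sketch with placeholder names:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"  # allow Cluster Autoscaler to evict these pods
    spec:
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sleep", "3600"]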
If there are no restrictions, the pods are moved and underloaded nodes are removed. Nodes are removed one at a time per check cycle.
Autoscaling to zero nodes
In a node group, you can configure autoscaling to zero nodes: at low load, all nodes in the group are deleted. The node group card with all its settings is kept. When the load increases, nodes can be added to this node group again.
Autoscaling to zero nodes works only if there are at least two working nodes left in other cluster node groups. Working nodes must remain in the cluster to accommodate the system components that are required for the cluster to function.
For example, autoscaling to zero nodes will not work if the cluster has:
- two node groups with one working node in each;
- one node group with two working nodes.
When there are no nodes in the group, you don't pay for unused resources.
Enable autoscaling with Cluster Autoscaler
If you set the minimum number of nodes in the group higher than the current number, the group will not be scaled up to the lower limit immediately: nodes will be added only after pods appear in the PENDING status. The same applies to the upper limit: if the current number of nodes is greater than the upper limit, nodes will be removed only after the next check of pods.
You can enable autoscaling with Cluster Autoscaler in the control panel, via the Managed Kubernetes API, or via Terraform.
- In the control panel, in the top menu, click Products and select Managed Kubernetes.
- Open the Cluster page → Cluster Composition tab.
- From the menu of the node group, select Change Number of Nodes.
- In the Number of nodes field, open the With autoscaling tab.
- Set the minimum and maximum number of nodes in the group; the number of nodes will change only within this range. For fault-tolerant operation of system components, we recommend keeping at least two working nodes in the cluster; the nodes can be in different groups.
- Click Save.
Configure Cluster Autoscaler
You can configure Cluster Autoscaler separately for each node group.
Parameters, their descriptions and default values can be found in the Cluster Autoscaler Parameters table. If you do not specify a parameter in the manifest, the default value will be used.
Example manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-nodegroup-options
  namespace: kube-system
data:
  config.yaml: |
    150da0a9-6ea6-4148-892b-965282e195b0:
      scaleDownUtilizationThreshold: 0.55
      scaleDownUnneededTime: 7m
      zeroOrMaxNodeScaling: true
    e3dc24ca-df9d-429c-bcd5-be85f8d28710:
      scaleDownGpuUtilizationThreshold: 0.25
      ignoreDaemonSetsUtilization: true
Here 150da0a9-6ea6-4148-892b-965282e195b0 and e3dc24ca-df9d-429c-bcd5-be85f8d28710 are the unique identifiers (UUIDs) of the node groups in the cluster. You can view them in the control panel: in the top menu, click Products ⟶ Managed Kubernetes ⟶ Kubernetes section ⟶ cluster page ⟶ copy the UUID above the node group card, next to the pool segment.