Configure node group autoscaling

After creating a Managed Kubernetes cluster, you can configure automatic scaling of node groups using Cluster Autoscaler. This helps utilize cluster resources optimally: the number of nodes in the group automatically increases or decreases depending on the load on the cluster.

You can enable node group autoscaling in the control panel, via the Managed Kubernetes API, or with Terraform; you do not need to install Cluster Autoscaler in the cluster yourself.

Managed Kubernetes uses Metrics Server to autoscale pods.

Working principle

The minimum and maximum number of nodes in a group can be set when autoscaling is enabled — Cluster Autoscaler will only change the number of nodes within these limits.

Every 10 seconds, Cluster Autoscaler checks for pods in the PENDING status and analyzes the load, that is, the vCPU and RAM requests of the pods. Depending on the result of the check, nodes are added or removed.
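Because Cluster Autoscaler bases its decisions on resource requests rather than actual usage, pods should declare requests explicitly. A minimal sketch of a pod manifest with vCPU and RAM requests (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25     # example image
      resources:
        requests:
          cpu: "500m"       # Cluster Autoscaler sums vCPU requests like this one
          memory: "256Mi"   # and RAM requests across the pods on each node
```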

Adding a node

If there is a pod in the PENDING status and the cluster does not have enough free resources to schedule it, a node is added to the cluster.

If pods remain in the PENDING status after one node is created, more nodes are added, one per check cycle.

Deleting a node

If there are no pods in the PENDING status, Cluster Autoscaler checks the amount of resources requested by pods.

If the total resources requested by the pods on a node are less than 50% of that node's resources, Cluster Autoscaler marks the node as unneeded. For example, a node with 4 vCPUs whose pods request a total of 1.5 vCPUs (37.5%) would be marked as unneeded. If resource requests on the node have not increased after 10 minutes, Cluster Autoscaler checks whether its pods can be moved to other nodes.

Cluster Autoscaler will not move pods (and therefore will not delete a node) in the following cases:

  • pods restricted by a PodDisruptionBudget;
  • pods in the kube-system namespace without a PodDisruptionBudget;
  • pods created without a controller (Deployment, ReplicaSet, StatefulSet, and others);
  • pods that use local storage;
  • the other nodes do not have enough free resources for the pods' requests;
  • the pods do not match the other nodes' nodeSelector, affinity/anti-affinity rules, and so on.

To allow such pods to be moved, add the following annotation to them:

cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
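For example, a pod that Cluster Autoscaler would otherwise refuse to move could be annotated like this (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache               # hypothetical name
  annotations:
    # allow Cluster Autoscaler to evict this pod when deleting a node
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
    - name: cache
      image: redis:7        # example image
```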

If there are no such restrictions, the pods are moved and the underloaded node is deleted. Nodes are deleted one per check cycle.

Recommendations

For optimal performance of Cluster Autoscaler, we recommend:

  • make sure that the project has enough vCPU, RAM, and disk quota to create the maximum number of nodes in the group;
  • specify resource requests for pods in manifests;
  • check that the nodes in the group have the same configuration and labels;
  • set a PodDisruptionBudget for pods that must not be stopped: this helps avoid downtime when pods are moved between nodes;
  • do not use any other Cluster Autoscaler;
  • do not change node resources manually through the control panel: Cluster Autoscaler ignores such changes, and all new nodes are created with the original configuration.
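A minimal PodDisruptionBudget sketch for the recommendation above (the names and labels are illustrative and must match your workload):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb             # hypothetical name
spec:
  minAvailable: 1           # keep at least one pod running during eviction
  selector:
    matchLabels:
      app: my-app           # must match the labels of the protected pods
```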

Enable autoscaling

After you have created a node group in a cluster, you can enable autoscaling for it. Autoscaling works only when the cluster is in the ACTIVE status.

For your information

If you set the minimum number of nodes in a group higher than the current number, the group will not grow to the lower limit immediately; it will scale only after pods appear in the PENDING status. The same applies to the upper limit: if the current number of nodes exceeds it, deletion starts only after the pod check.

  1. In the Control Panel, go to Cloud Platform → Kubernetes.
  2. Open the cluster page → Cluster Composition tab.
  3. From the menu of the node group, select Configure autoscaling.
  4. Check the Enable autoscaling checkbox. Set the minimum and maximum number of nodes in the group; the number of nodes will change only within this range.
  5. Click Save.