Add a node group in a Managed Kubernetes cluster
You can add a cloud server or dedicated server node group to a Managed Kubernetes cluster. Node groups on dedicated servers and node groups on cloud servers cannot be used in the same cluster at the same time.
For more information about configurations, see the Managed Kubernetes Node Configurations instructions.
Add a node group on the cloud server
Control panel
API
Terraform
If the configurations available in the control panel are not suitable, you can create a node group with a fixed cloud server configuration (flavor) via the Managed Kubernetes API or Terraform.
-
In the dashboard, on the top menu, click Products and select Managed Kubernetes.
-
Open the Cluster page → Cluster Composition tab.
-
Click Add Node Group.
-
Select the pool segment in which all worker nodes in the group will be located. Once a node group is added, the pool segment cannot be changed.
-
Set up the configuration of the worker nodes in the group:
5.1 Click Select Configuration and select the configuration of the worker nodes in the group:
- arbitrary — you can specify any resource ratio;
- fixed with GPU — ready-made node configurations with GPUs and a preset resource ratio.
If the default configurations are not suitable, you can add a node group with a fixed cloud server configuration via the Managed Kubernetes API or Terraform after the cluster is created.
5.2 If you selected an arbitrary configuration, specify the number of vCPUs and the amount of RAM, select the boot disk, and specify the disk size.
5.3 If you selected a fixed configuration with GPU, select a ready-made node configuration with GPUs, select the boot disk, and specify the disk size. To install GPU drivers yourself, turn off the GPU drivers toggle. By default, the toggle is on and the cluster uses pre-installed drivers.
5.4 Click Save.
-
Configure the number of worker nodes. For fault-tolerant operation of system components, we recommend at least two worker nodes in the cluster; the nodes can be in different groups:
6.1 To have a fixed number of nodes in a node group, open the Fixed tab and specify the number of nodes.
6.2 To use autoscaling in a node group, open the With autoscaling tab and set the minimum and maximum number of nodes in the group — the number of nodes will change only within this range. Autoscaling is not available for node groups with GPUs without drivers.
-
Optional: To make a node group interruptible, check the Interruptible node group checkbox. Interruptible node groups are available only in the ru-7a and ru-7b pool segments.
-
Optional: To add node group labels, open the Advanced settings — Labels, taints, user data block. In the Labels field, click Add. Specify the label key and value. Click Add.
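For context on how node group labels are consumed, a pod can be pinned to nodes in a labeled group with a nodeSelector. A minimal sketch; the label key and value below (environment: gpu-group) are hypothetical placeholders, not names from this guide:

```yaml
# Sketch: schedule a pod only on nodes that carry a hypothetical label.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeSelector:
    environment: gpu-group   # hypothetical key/value set on the node group
  containers:
    - name: app
      image: nginx:1.25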
-
Optional: To add node group taints, open the Advanced settings — Labels, taints, user data block. In the Taints field, click Add. Specify the taint key and value. Select an effect:
- NoSchedule — new pods will not be scheduled on the node; pods already running continue to run;
- PreferNoSchedule — the scheduler avoids placing new pods on the node, but will place them if there are no other suitable nodes in the cluster;
- NoExecute — running pods without a matching toleration will be evicted from the node.
Click Add.
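For context, pods opt in to tainted nodes with a matching toleration. A minimal sketch; the taint key and value (dedicated=gpu) are hypothetical placeholders:

```yaml
# Sketch: a pod that tolerates a NoSchedule taint with hypothetical key/value "dedicated=gpu".
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: gpu
      effect: NoSchedule
  containers:
    - name: app
      image: nginx:1.25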
-
Optional: To add a script with user parameters for configuring the Managed Kubernetes cluster, open the Advanced settings — Labels, taints, user data block. Paste the script into the User data field. The maximum size of a script that is not Base64 encoded is 47 KB. For example scripts and supported formats, see the User data instruction.
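As an illustration of the kind of script User data accepts, here is a minimal cloud-config sketch. Whether these exact modules run depends on the node OS image, so treat the example as an assumption and check the User data instruction for the supported formats:

```yaml
#cloud-config
# Sketch: install a package and drop a file on each node at first boot.
package_update: true
packages:
  - htop
write_files:
  - path: /etc/motd
    content: |
      Node bootstrapped via user data.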
-
Click Add Node Group. You can view all created nodes in the control panel: in the top menu, click Products and select Cloud Servers.
- Look at the list of fixed-configuration flavors in the required pool.
- Copy the value from the ID column.
- Using the Managed Kubernetes API methods, create a cluster with a node group in the desired configuration, or add a node group to an existing cluster. In the request, set the flavor_id parameter to the fixed-configuration ID you copied in step 2.
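The request body for adding a node group might look like the following JSON sketch. The field names here (nodegroups, nodes_count, availability_zone, volume_gb, volume_type) are illustrative assumptions; check the Managed Kubernetes API reference for the exact schema before use:

```json
{
  "nodegroups": [
    {
      "flavor_id": "2011",
      "nodes_count": 2,
      "availability_zone": "ru-9a",
      "volume_gb": 50,
      "volume_type": "fast.ru-9a"
    }
  ]
}
```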
Use the instructions in the Terraform documentation:
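As a hedged sketch of the Terraform route, a node group resource can look roughly like the block below. The resource name and every attribute shown are assumptions made for illustration; verify them against the Selectel Terraform provider documentation before applying:

```hcl
# Sketch only: resource and attribute names are assumptions, verify against the provider docs.
resource "selectel_mks_nodegroup_v1" "example" {
  cluster_id        = selectel_mks_cluster_v1.example.id
  project_id        = selectel_mks_cluster_v1.example.project_id
  region            = "ru-9"
  availability_zone = "ru-9a"
  nodes_count       = 2
  flavor_id         = "2011"   # fixed-configuration ID copied from the flavor list
  volume_gb         = 50
  volume_type       = "fast.ru-9a"
}
```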
View a list of flavors in a specific pool
Flavors correspond to cloud server configurations and define the server's number of vCPUs, amount of RAM, and, optionally, local disk size. You can view all ready-made cloud server flavors and flavors created on request.
-
View the list of available flavors:
openstack flavor list
Example response for the ru-9 pool (abbreviated):
+------------+-----------------------+--------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+------------+-----------------------+--------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 0 | 0 | 1 | True |
| 1011 | SL1.1-1024 | 1024 | 0 | 0 | 1 | True |
| 2011 | CPU1.4-8192 | 8192 | 0 | 0 | 4 | True |
| 4011 | RAM1.2-16384 | 16384 | 0 | 0 | 2 | True |
| 3021 | GL2.6-24576-0-1GPU | 24576 | 0 | 0 | 6 | True |
| 9011 | PRC10.1-512 | 512 | 0 | 0 | 1 | True |
| 9021 | PRC20.1-512 | 512 | 0 | 0 | 1 | True |
| 9051 | PRC50.1-512 | 512 | 0 | 0 | 1 | True |
| 8301 | HFL1.1-2048-30 | 2048 | 30 | 0 | 1 | True |
+------------+-----------------------+--------+------+-----------+-------+-----------+
Here:
- ID — the flavor ID of the cloud server;
- Name — the name of the flavor, which corresponds to the configuration:
  - m1.XX — OpenStack base configurations, similar to arbitrary configurations;
  - SL1.XX — fixed configurations of the Standard Line;
  - CPU1.XX — fixed configurations of the CPU Line;
  - RAM1.XX — fixed configurations of the Memory Line;
  - GL2.XX — fixed configurations of the GPU Line;
  - PRC10.XX — fixed configurations of the Shared Line with a 10% vCPU share;
  - PRC20.XX — fixed configurations of the Shared Line with a 20% vCPU share;
  - PRC50.XX — fixed configurations of the Shared Line with a 50% vCPU share;
  - HFL1.XX — fixed configurations of the HighFreq Line;
- RAM — RAM size in MB;
- Disk — local disk size in GB;
- VCPUs — number of vCPUs;
- Is Public — flavor scope:
  - True — public ready-made flavors;
  - False — private flavors.
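If you script this step, the ID column can be cut out of the CLI table with awk. A sketch, shown here against a saved sample line standing in for a live `openstack flavor list` call:

```shell
# Extract the ID column for a flavor by name from `openstack flavor list` output.
# A saved sample line stands in for the live CLI call.
sample='| 2011       | CPU1.4-8192           | 8192   | 0    | 0         | 4     | True      |'
flavor_id=$(printf '%s\n' "$sample" | awk -F'|' '$3 ~ /CPU1\.4-8192/ {gsub(/ /, "", $2); print $2}')
echo "$flavor_id"
```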
Add a node group on a dedicated server
-
In the dashboard, on the top menu, click Products and select Managed Kubernetes.
-
Open the Cluster page → Cluster Composition tab.
-
Click Add Node Group.
-
Select the pool in which all worker nodes in the group will be located. The worker nodes must be in a pool in the same availability zone as the master nodes. Once a node group has been created, the pool cannot be changed.
-
Set up the configuration of the worker nodes in the group:
5.1 Click Select Configuration.
5.2 Select a tariff plan.
5.3 Select a ready-made dedicated server configuration.
5.4 Click Select.
Once the cluster is created, the node configuration cannot be changed.
-
Configure the number of worker nodes:
6.1 Open the Fixed tab.
6.2 Specify the number of nodes. The minimum number of nodes is one. For fault-tolerant operation of system components, we recommend at least two worker nodes in the cluster; the nodes can be in different groups.
-
Optional: To add node group labels, in the Labels field, click Add. Enter the label key and value. Click Add. After the node group is created, you cannot add new labels or modify or delete existing ones.
-
Optional: To add node group taints, in the Taints field, click Add. Enter the taint key and value. Select an effect:
- NoSchedule — new pods will not be scheduled on the node; pods already running continue to run;
- PreferNoSchedule — the scheduler avoids placing new pods on the node, but will place them if there are no other suitable nodes in the cluster;
- NoExecute — running pods without a matching toleration will be evicted from the node.
Click Add.
Once the node group is created, you cannot add new taints or modify or delete existing ones.
-
Click Add. You can view all created nodes in the control panel: in the top menu, click Products → Dedicated Servers.