Add a node group in a Managed Kubernetes cluster
Adding a node group is not available in Managed Kubernetes clusters on dedicated servers.
You can add a node group only in a Managed Kubernetes cluster on cloud servers.
For more information about configurations, see the Managed Kubernetes node configurations manual.
All created nodes are displayed in the control panel under Cloud platform → Servers.
Add a node group on the cloud server
Control panel
API
Terraform
If the configurations available in the control panel are not suitable, you can create a node group with a fixed cloud server configuration (flavor) via the Managed Kubernetes API or Terraform.
1. In the control panel, go to Cloud platform → Kubernetes.
2. Open the cluster page and go to the Cluster composition tab.
3. Click Add a node group.
4. Select a pool segment. The pool segment will contain all worker nodes in the group. After the node group is added, the pool segment cannot be changed.
5. Click Select configuration and select the configuration of worker nodes in the group:
- arbitrary — any resource ratio can be specified;
- fixed with GPU — ready-made node configurations with graphics processors and a preset resource ratio.
If the default configurations are not suitable, you can add a node group with a fixed cloud server configuration via the Managed Kubernetes API or Terraform after the cluster is created.
5.1 If you selected the arbitrary configuration, specify the number of vCPUs and the amount of RAM, select the boot disk, and specify the disk size.
5.2 If you selected a fixed configuration with GPUs, select a ready-made GPU node configuration and a boot disk, and specify the disk size. To install GPU drivers yourself, turn off the GPU drivers toggle. By default, the GPU drivers toggle is on and the cluster uses the pre-installed drivers.
5.3 Click Save.
6. Specify the number of worker nodes in the group.
7. Optional: to make the node group preemptible, check the Preemptible node group box. Preemptible node groups are available only in the ru-7a and ru-7b pool segments.
8. Optional: to enable autoscaling, check the Autoscaling of node group box. Set the minimum and maximum number of nodes in the group; the number of nodes will change only within this range. Autoscaling is not available for GPU node groups without drivers.
9. Optional: to add node group tags, open the Additional settings — tags, taints, user data block. In the Tags field, click Add. Specify a key and a label value. Click Add.
10. Optional: to add node group taints, open the Additional settings — tags, taints, user data block. In the Taints field, click Add. Specify the taint key and value. Select the effect:
- NoSchedule — new pods will not be added and existing pods will continue to run;
- PreferNoSchedule — the scheduler will try not to place new pods on the node, but may do so if there are no other suitable nodes in the cluster;
- NoExecute — running pods without tolerations will be removed.
Click Add.
11. Optional: to add a script with custom parameters for configuring the Managed Kubernetes cluster, open the Additional settings — tags, taints, user data block. Insert the script in the User data field. The maximum script size for non-Base64-encoded data is 47 KB. For script examples and supported formats, see the User data manual.
12. Click Add a node group.
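As a minimal sketch, the user data mentioned above can be a plain shell script that runs once when a node first boots; the marker file path and message below are illustrative, not part of the product:

```shell
#!/bin/bash
# Illustrative user-data script: writes a marker file on first boot
# so you can later confirm that custom user data ran on the node.
# The path /tmp/userdata-marker is an example, not a required location.
set -euo pipefail

MARKER=/tmp/userdata-marker
echo "custom user data applied" > "$MARKER"
cat "$MARKER"
```

See the User data manual for the formats the platform actually accepts (for example, Base64-encoded data is also supported, with a different size limit).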
1. View the list of fixed-configuration flavors in the pool you need.
2. Copy the value from the ID column.
3. Use the Managed Kubernetes API methods to create a cluster with a node group of the desired configuration, or to add a node group to an existing cluster. In the request, set the flavor_id parameter to the fixed configuration ID you copied in step 2.
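For illustration, the request body with flavor_id could be assembled like this; the JSON field names other than flavor_id, the file name, and the commented-out endpoint are assumptions — check the Managed Kubernetes API reference for the exact schema:

```shell
# flavor_id copied in step 2 (example value: the SL1.1-1024 flavor)
FLAVOR_ID="1011"

# Assemble the node group part of the request body.
# Field names besides flavor_id are illustrative.
cat > nodegroup.json <<EOF
{
  "nodegroup": {
    "flavor_id": "${FLAVOR_ID}",
    "nodes_count": 2
  }
}
EOF

# A hypothetical call shape (endpoint and headers are placeholders):
# curl -X POST "https://<api-host>/v1/clusters/<cluster_id>/nodegroups" \
#      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
#      -d @nodegroup.json

cat nodegroup.json
```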
Use the instructions in the Terraform documentation.
View a list of flavors in a specific pool
Flavors correspond to cloud server configurations and define the server's number of vCPUs, amount of RAM, and, optionally, local disk size. You can view both ready-made cloud server flavors and flavors created on request.
View the list of available flavors:
openstack flavor list
Example response for the ru-9 pool (abbreviated):
+------------+-----------------------+--------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+------------+-----------------------+--------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 0 | 0 | 1 | True |
| 1011 | SL1.1-1024 | 1024 | 0 | 0 | 1 | True |
| 2011 | CPU1.4-8192 | 8192 | 0 | 0 | 4 | True |
| 4011 | RAM1.2-16384 | 16384 | 0 | 0 | 2 | True |
| 3021 | GL2.6-24576-0-1GPU | 24576 | 0 | 0 | 6 | True |
| 9011 | PRC10.1-512 | 512 | 0 | 0 | 1 | True |
| 9021 | PRC20.1-512 | 512 | 0 | 0 | 1 | True |
| 9051 | PRC50.1-512 | 512 | 0 | 0 | 1 | True |
| 8301 | HFL1.1-2048-30 | 2048 | 30 | 0 | 1 | True |
+------------+-----------------------+--------+------+-----------+-------+-----------+

Here:
- ID — cloud server flavor ID;
- Name — flavor name that corresponds to the configuration:
  - m1.XX — OpenStack base configurations, similar to arbitrary configurations;
  - SL1.XX — fixed configurations of the Standard Line;
  - CPU1.XX — fixed configurations of the CPU Line;
  - RAM1.XX — fixed configurations of the Memory Line;
  - GL2.XX — fixed configurations of the GPU Line;
  - PRC10.XX — fixed configurations of the Shared Line with a 10% vCPU share;
  - PRC20.XX — fixed configurations of the Shared Line with a 20% vCPU share;
  - PRC50.XX — fixed configurations of the Shared Line with a 50% vCPU share;
  - HFL1.XX — fixed configurations of the HighFreq Line;
  - SGX1.XX — fixed configurations of the SGX Line;
- RAM — RAM size in MB;
- Disk — local disk size in GB;
- VCPUs — number of vCPUs;
- Is Public — flavor scope:
  - True — public ready-made flavors;
  - False — private flavors.
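The ID can also be picked out of the `openstack flavor list` output non-interactively, for example with awk; the file name flavors.txt and the saved sample rows below are assumptions for illustration:

```shell
# In a real session you would save the table once:
#   openstack flavor list > flavors.txt
# Here we use two sample rows from the example output above.
cat > flavors.txt <<'EOF'
| 1011       | SL1.1-1024            | 1024   | 0    | 0         | 1     | True      |
| 2011       | CPU1.4-8192           | 8192   | 0    | 0         | 4     | True      |
EOF

# Fields are pipe-separated: column 2 is ID, column 3 is Name.
# Extract the ID of a flavor by its exact name.
awk -F'|' '$3 ~ /SL1\.1-1024/ {gsub(/ /, "", $2); print $2}' flavors.txt
# prints: 1011
```

The printed value is what you pass as flavor_id when adding a node group via the API.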