Create a Managed Kubernetes cluster on a cloud server
You can create a maximum of 10 fault-tolerant and 10 basic Managed Kubernetes clusters on cloud servers in a single project and pool.
Control panel
Terraform
Configure the cluster
- In the control panel, go to Cloud platform → Kubernetes.
- Click Create a cluster.
- Enter a name for the cluster. The name will appear in the names of the cluster objects: node groups, nodes, load balancers, networks, and disks. For example, if the cluster name is `kelsie`, the node group will be named `kelsie-node-gdc8q` and its boot disk `kelsie-node-gdc8q-volume`.
- Select the region and pool. Once the cluster is created, the region and pool cannot be changed.
- Select a Kubernetes version. You can update the Kubernetes version after the cluster is created.
- Select the cluster type:
- fault-tolerant — the Control Plane runs on three master nodes hosted on different hosts in different segments of the same pool. If one of the three master nodes becomes unavailable, the Control Plane continues to run;
- basic — the Control Plane runs on a single master node on a single host in one pool segment. If the master node becomes unavailable, the Control Plane stops working.
Once a cluster is created, the cluster type cannot be changed.
- Optional: to make the cluster accessible only over a private network and inaccessible from the Internet, check the Private kube API checkbox. By default, the cluster is created in a public network and is automatically assigned a public IP address for the kube API that is accessible from the Internet. The type of kube API access cannot be changed after the cluster is created.
- Click Continue.
Configure the node group
- In the Server type field, select Cloud server.
- Select the pool segment. All worker nodes in the group will be placed in this segment. Once the cluster is created, the pool segment cannot be changed.
- Click Select configuration and choose the configuration of the worker nodes in the group:
- arbitrary — you can specify any resource ratio;
- fixed with GPU — ready-made node configurations with GPUs and a fixed resource ratio.
If the default configurations are not suitable, after the cluster is created you can Add a node group with a fixed cloud server configuration via the Managed Kubernetes API or Terraform.
3.1 If you selected an arbitrary configuration, specify the number of vCPUs and the amount of RAM, select the boot disk, and specify the disk size.
3.2 If you selected a fixed configuration with GPUs, choose a ready-made GPU node configuration, select the boot disk, and specify the disk size. To install GPU drivers yourself, turn off the GPU drivers toggle. The GPU drivers toggle is enabled by default, and the cluster uses pre-installed drivers.
3.3 Click Save.
- Specify the number of worker nodes in the group.
- Optional: to make the node group interruptible, check the Interruptible node group checkbox. Interruptible node groups are available only in the pool segments ru-7a and ru-7b.
- Optional: to enable autoscaling, check the Autoscaling of node group checkbox. Set the minimum and maximum number of nodes in the group; the number of nodes will change only within this range. Autoscaling is not available for GPU node groups without drivers.
- Optional: to add node group tags, open the Additional settings — tags, taints, user data block. In the Tags field, click Add. Enter the tag key and value. Click Add.
- Optional: to add node group taints, open the Additional settings — tags, taints, user data block. In the Taints field, click Add. Enter the taint key and value. Select the effect:
- NoSchedule — new pods will not be scheduled on the node, and existing pods will continue to run;
- PreferNoSchedule — new pods will be scheduled on the node only if there are no other suitable nodes in the cluster;
- NoExecute — running pods without a matching toleration will be evicted.
Click Add.
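A node group taint keeps ordinary workloads off those nodes until a pod carries a matching toleration. As an illustration only, a minimal pod manifest that tolerates a hypothetical taint `dedicated=gpu` with the `NoSchedule` effect (the key, value, and effect must match exactly what you entered in the panel; the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-gpu-workload   # placeholder name
spec:
  tolerations:
    - key: "dedicated"         # must match the taint key on the node group
      operator: "Equal"
      value: "gpu"             # must match the taint value
      effect: "NoSchedule"     # must match the selected effect
  containers:
    - name: app
      image: nginx             # placeholder image
```

Without this `tolerations` entry, the scheduler would refuse to place the pod on the tainted nodes.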
- Optional: to add a script with custom parameters for configuring the Managed Kubernetes cluster, open the Additional settings — tags, taints, user data block. Paste the script into the User data field. Examples of scripts and the supported formats are given in the User data manual.
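The exact formats accepted by the User data field are listed in the User data manual. As one common illustration, a cloud-init `#cloud-config` sketch that installs a package and raises an inotify limit on every node at first boot (the package name and sysctl value are arbitrary examples, not requirements):

```yaml
#cloud-config
# Example only: runs once on each node at first boot.
packages:
  - htop                               # arbitrary example package
write_files:
  - path: /etc/sysctl.d/90-inotify.conf
    content: |
      fs.inotify.max_user_watches=524288
runcmd:
  - sysctl --system                    # apply the sysctl change immediately
```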
- Optional: to add another group of worker nodes to the cluster, click Add a node group. You can create a cluster with worker node groups in different segments of the same pool. This increases fault tolerance and helps maintain application availability if a failure occurs in one of the segments.
- In the Network block, select a private subnet without Internet access to which all cluster nodes will be connected.
To create a new private subnet, in the Subnet for nodes field select New private subnet. A private network `<cluster_name>-network`, a private subnet, and a router `<cluster_name>-router` will be created automatically, where `<cluster_name>` is the cluster name. The CIDR is assigned automatically.
If you already have a private subnet, in the Subnet for nodes field select the existing subnet. The subnet must meet the following conditions:
- the subnet must be connected to a cloud router;
- the subnet must not overlap with the ranges 10.250.0.0/16, 10.10.0.0/16, and 10.96.0.0/12, which are used for internal addressing in Managed Kubernetes;
- DHCP must be disabled on the subnet.
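Before reusing an existing subnet, it can be worth checking programmatically that its CIDR avoids the reserved ranges. A small sketch using Python's standard `ipaddress` module (the function name and candidate subnets are illustrative):

```python
import ipaddress

# Ranges reserved for internal Managed Kubernetes addressing (from this guide).
RESERVED = [
    ipaddress.ip_network("10.250.0.0/16"),
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("10.96.0.0/12"),
]

def subnet_is_usable(cidr: str) -> bool:
    """Return True if the candidate node subnet overlaps none of the reserved ranges."""
    candidate = ipaddress.ip_network(cidr)
    return not any(candidate.overlaps(reserved) for reserved in RESERVED)

print(subnet_is_usable("192.168.10.0/24"))  # True: outside all reserved ranges
print(subnet_is_usable("10.96.1.0/24"))     # False: falls inside 10.96.0.0/12
```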
- Click Continue.
Set up automation
- Optional: to enable node auto-recovery, check the Restore nodes checkbox. If the cluster has only one worker node, auto-recovery is not available.
- Optional: to enable automatic updates of patch versions, check the Install patch versions checkbox. If the cluster has only one worker node, automatic Kubernetes patch updates are not available.
- Select the cluster maintenance start time — the time when automatic cluster maintenance actions will begin.
- Optional: to enable audit logs, check the Audit logs checkbox. After the cluster is created, set up integration with a log storage and analysis system.
- Check the price of the cluster on the cloud server.
- Click Create. Creating the cluster takes a few minutes; during this time the cluster has the `CREATING` status. The cluster is ready for use when it moves to the `ACTIVE` status.
Use the instructions in the Terraform documentation:
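As a starting point, a minimal sketch using the Selectel Terraform provider. The resource and attribute names (`selectel_mks_cluster_v1`, `selectel_mks_nodegroup_v1`, and their fields) should be verified against the current provider documentation, and all values below are placeholders:

```hcl
# Sketch only: verify resource names and arguments against the
# Selectel Terraform provider documentation before applying.
resource "selectel_mks_cluster_v1" "cluster" {
  name         = "kelsie"
  project_id   = var.project_id
  region       = "ru-7"
  kube_version = "1.31.1"  # choose a version offered for new clusters
  zonal        = true      # true = basic (one master); false = fault-tolerant
}

resource "selectel_mks_nodegroup_v1" "nodes" {
  cluster_id        = selectel_mks_cluster_v1.cluster.id
  project_id        = var.project_id
  region            = "ru-7"
  availability_zone = "ru-7a"
  nodes_count       = 2
  cpus              = 2
  ram_mb            = 4096
  volume_gb         = 50
  volume_type       = "fast.ru-7a"
}
```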