Create a Managed Kubernetes cluster on a cloud server

In a single project and pool, you can create a maximum of 10 fault-tolerant clusters and 10 basic Managed Kubernetes clusters on cloud servers.

  1. Configure the cluster.
  2. Configure the node group.
  3. Set up automation.

1. Configure the cluster

  1. In the dashboard, on the top menu, click Products and select Managed Kubernetes.

  2. Click Create Cluster.

  3. Enter a name for the cluster. The name will appear in the names of the cluster objects: node groups, nodes, balancers, networks, and disks. For example, if the cluster name is kelsie, the name of the node group would be kelsie-node-gdc8q and the boot disk would be kelsie-node-gdc8q-volume.

  4. Select the region and pool where the master nodes will reside. Once the cluster is created, the region and pool cannot be changed.

  5. Select the Kubernetes version. After the cluster is created, you can upgrade the Kubernetes version.

  6. Select the type of cluster:

    • fault-tolerant — the Control Plane is placed on three master nodes that run on different hosts in different segments of the same pool. If one of the three master nodes becomes unavailable, the Control Plane continues to run;
    • basic — the Control Plane is hosted on a single master node that runs on a single host in a single pool segment. If the master node becomes unavailable, the Control Plane stops running.

    Once a cluster is created, the cluster type cannot be changed.

  7. Optional: To make the cluster available only on a private network and inaccessible from the Internet, check the Private kube API checkbox. By default, the cluster is created on a public network, and the kube API is automatically assigned a public IP address accessible from the Internet. Once the cluster is created, the type of access to the kube API cannot be changed.

  8. Click Continue.
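
If you manage infrastructure as code, the settings from this section map onto Terraform. Below is a minimal sketch, assuming the selectel_mks_cluster_v1 resource of the Selectel Terraform provider; attribute names and required fields vary between provider versions, so verify them against the provider documentation before applying.

  # Sketch of a Managed Kubernetes cluster; the resource and attribute names
  # come from the Selectel Terraform provider and may differ in your version.
  resource "selectel_mks_cluster_v1" "kelsie" {
    name         = "kelsie"        # used as a prefix in node group, disk, and network names
    project_id   = var.project_id  # hypothetical variable holding your project ID
    region       = "ru-7"          # example pool; region and pool cannot be changed later
    kube_version = "1.31.1"        # example version; can be upgraded after creation

    # false = fault-tolerant (three master nodes), true = basic (one master node);
    # the cluster type cannot be changed after creation.
    zonal = false
  }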

2. Configure the node group

  1. In the Server Type field, select Cloud Server.

  2. Select the pool segment where all worker nodes in the group will be located. Once the cluster is created, the pool segment cannot be changed.

  3. Configure the worker nodes in the group:

    3.1 Click Select Configuration and select the configuration of the worker nodes in the group:

    • arbitrary — any resource ratio can be specified;
    • fixed with GPU — ready-made node configurations with GPUs and a fixed resource ratio.

    If the default configurations are not suitable, you can add a node group with a fixed cloud server configuration through the Managed Kubernetes API or Terraform after the cluster is created; see the Terraform sketch at the end of this section.

    3.2 If you selected an arbitrary configuration, specify the number of vCPUs and the amount of RAM, select the boot disk, and specify the disk size.

    3.3 If you selected a fixed configuration with GPU, select a ready-made node configuration with GPUs and a boot disk, and specify the disk size. To install GPU drivers yourself, turn off the GPU Drivers toggle switch. By default, the GPU Drivers toggle switch is enabled and the cluster uses pre-installed drivers.

    3.4 Click Save.

  4. Configure the number of worker nodes. For fault-tolerant operation of system components, it is recommended to have at least two worker nodes in the cluster; the nodes can be in different groups:

    4.1 To have a fixed number of nodes in a node group, open the Fixed tab and specify the number of nodes.

    4.2 To use autoscaling in a node group, open the With autoscaling tab and set the minimum and maximum number of nodes in the group; the number of nodes will change only within this range. Autoscaling is not available for groups of nodes with GPUs without drivers.

  5. Optional: To make a node group interruptible, check the Interruptible node group checkbox. Interruptible node groups are available only in pool segments ru-7a and ru-7b.

  6. Optional: To add node group labels, open the Advanced Settings — Labels, Taints, User data block. In the Labels field, click Add. Enter the label key and value. Click Add.

  7. Optional: To add node group taints, open the Advanced Settings — Labels, Taints, User data block. In the Taints field, click Add. Enter the taint key and value. Select an effect:

    • NoSchedule — new pods will not be scheduled on the node, and existing pods will continue to run;
    • PreferNoSchedule — the scheduler will try not to place new pods on the node, but will do so if there are no other suitable nodes;
    • NoExecute — running pods without a matching toleration will be evicted.

    Click Add.

  8. Optional: To add a script with user parameters for configuring the Managed Kubernetes cluster, open the Advanced Settings — Labels, Taints, User data block. In the User data field, paste the script. Examples of scripts and supported formats can be found in the User data instruction.

  9. Optional: To add an additional group of worker nodes to the cluster, click Add Node Group. You can create a cluster with groups of worker nodes in different segments of the same pool. This will increase fault tolerance and help maintain application availability if a failure occurs in one of the segments.

  10. In the Network block, specify a private subnet with no access from the Internet to which all nodes in the cluster will be joined.

    10.1 To create a private subnet, in the Subnet for nodes field, select New private subnet.

    A private network <cluster_name>-network, a private subnet and a router <cluster_name>-router will be automatically created, where <cluster_name> is the cluster name. CIDR is assigned automatically.

    If you enable port security, the default security group will be assigned to the node ports. Do not change the rules in this group or assign a different security group; otherwise, the cluster may stop working. Traffic filtering is enabled by default on private networks that are created:

    • in the ru-8 pool after May 15, 2025;
    • in the uz-2 pool after May 22, 2025;
    • in the ru-9 pool after May 26, 2025;
    • in the ke-1 pool after May 26, 2025;
    • in the uz-1 pool after May 27, 2025;
    • in the kz-1 pool after May 28, 2025;
    • in the gis-1 pool after May 29, 2025.

    10.2 To use an existing private subnet, select it in the Subnet for nodes field. The subnet must meet the following conditions:

  11. Click Continue.
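
As noted in step 3, node groups can also be created through the Managed Kubernetes API or Terraform. The sketch below shows how the settings from this section (pool segment, node configuration, node count and autoscaling, labels, taints) might look in code, assuming the selectel_mks_nodegroup_v1 resource of the Selectel Terraform provider; the attribute names and the taint syntax are assumptions to check against the provider documentation.

  # Illustrative node group for the cluster sketched in section 1; resource and
  # attribute names are assumptions based on the Selectel Terraform provider.
  resource "selectel_mks_nodegroup_v1" "nodes" {
    cluster_id        = selectel_mks_cluster_v1.kelsie.id
    project_id        = var.project_id  # hypothetical variable
    region            = "ru-7"
    availability_zone = "ru-7a"         # pool segment; cannot be changed later

    # Arbitrary configuration: the resource ratio is chosen freely.
    cpus        = 2
    ram_mb      = 4096
    volume_gb   = 50
    volume_type = "fast.ru-7a"          # example boot disk type

    # Node count and autoscaling: the count changes only within the min/max range.
    nodes_count         = 2
    enable_autoscale    = true
    autoscale_min_nodes = 2
    autoscale_max_nodes = 5

    # Labels and taints from steps 6 and 7; the keys and values are examples.
    labels = {
      "environment" = "test"
    }
    taints {
      key    = "dedicated"
      value  = "gpu"
      effect = "NoSchedule"  # new pods are not scheduled; existing pods keep running
    }
  }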

3. Set up automation

  1. Optional: To enable auto-recovery of nodes, check the Recover nodes checkbox. If the cluster has only one worker node, auto-recovery is not available.

  2. Optional: To enable auto-update of patch versions, check the Install patch versions checkbox. If the cluster has only one worker node, Kubernetes patch auto-update is not available.

  3. Select the cluster maintenance start time — the time when automatic cluster maintenance actions will start.

  4. Optional: To enable audit logs, check the Audit Logs checkbox. After creating the cluster, configure integration with a log storage and analysis system.

  5. Check the price of the cluster on the cloud server.

  6. Click Create. Creating a cluster takes a few minutes, during which time the cluster will be in the CREATING status. The cluster will be ready for operation when it moves to the ACTIVE status.
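
The automation settings from this section have Terraform counterparts as well. A minimal sketch, assuming the enable_autorepair, enable_patch_version_auto_upgrade, and maintenance_window_start attributes of selectel_mks_cluster_v1 (verify the names and the time format for your provider version):

  # Automation settings added to the cluster resource from section 1;
  # attribute names are assumptions to verify against the provider documentation.
  resource "selectel_mks_cluster_v1" "kelsie" {
    # ... cluster settings from section 1 ...

    enable_autorepair                 = true       # auto-recovery of nodes
    enable_patch_version_auto_upgrade = true       # auto-update of patch versions
    maintenance_window_start          = "03:00:00" # maintenance start time; format assumed
  }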