Managed Kubernetes: a quick start

You can work with a Managed Kubernetes cluster in the control panel, through the Managed Kubernetes API, or with Terraform.

  1. Create a cluster in the control panel.
  2. Connect to the cluster.
  3. Customize Ingress.

Learn more about infrastructure planning and application placement in Managed Kubernetes in the blog article Cloud-native in Kubernetes.

Create a cluster on a cloud server in the control panel

  1. Set up a cluster on a cloud server.
  2. Configure the node group.
  3. Set up automation.

Set up a cluster on a cloud server

  1. In the control panel, go to Cloud platform → Kubernetes.

  2. Click Create a cluster.

  3. Enter a name for the cluster. The name will appear in the names of the cluster objects: node groups, nodes, balancers, networks, and disks. For example, if the cluster name is kelsie, then the name of the node group will be kelsie-node-gdc8q and the boot disk kelsie-node-gdc8q-volume.

  4. Select a region and pool. Once the cluster is created, the pool cannot be changed.

  5. Select a version of Kubernetes. Once the cluster is created, you can update the Kubernetes version.

  6. Select cluster type:

    • fault-tolerant — Control Plane is placed on three master nodes that run on different hosts in different segments of the same pool. If one of the three master nodes is unavailable, Control Plane continues to run;
    • basic — Control Plane is hosted on a single master node that runs on a single host on a single pool segment. If the master node is unavailable, Control Plane will not run.

    Once a cluster is created, the cluster type cannot be changed.

  7. Optional: to make the cluster accessible only over a private network and inaccessible from the Internet, check the Private kube API checkbox. By default, the cluster is created in a public network and is automatically assigned a public IP address for the kube API, accessible from the Internet. After the cluster is created, the type of access to the kube API cannot be changed.

  8. In the Network block, select a private subnet with no Internet access to which all cluster nodes will be connected.

    To create a private subnet, in the Subnet for nodes field select New private subnet. A private network <cluster_name>-network, a private subnet, and a router <cluster_name>-router will be created automatically, where cluster_name is the cluster name. The CIDR is assigned automatically.

    If you already have a private subnet, in the Subnet for nodes field select the existing subnet. The subnet must meet the following conditions:

    • the subnet must be connected to a cloud router;
    • the subnet must not overlap with the ranges 10.250.0.0/16, 10.10.0.0/16, and 10.96.0.0/12. These ranges are used for the internal addressing of Managed Kubernetes;
    • DHCP must be disabled on the subnet.
  9. Click Continue.
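The reserved-range condition above can be sanity-checked with a short shell sketch. The helper functions and the candidate network address are illustrative; the check only tests whether the candidate's network address falls inside a reserved range, which covers the common case of a node subnet smaller than the reserved blocks.

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_range <ip> <network> <prefix_len>: succeed if <ip> is inside <network>/<prefix_len>.
in_range() {
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# Network address of the subnet you plan to use (example value).
CANDIDATE="192.168.10.0"
for r in "10.250.0.0 16" "10.10.0.0 16" "10.96.0.0 12"; do
  set -- $r
  if in_range "$CANDIDATE" "$1" "$2"; then
    echo "warning: $CANDIDATE falls inside reserved range $1/$2"
  fi
done
```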

Configure the node group

  1. In the field Server type select Cloud server.

  2. Select a pool segment. The segment will contain all worker nodes in the group. Once the cluster is created, the pool segment cannot be changed.

  3. Click Select configuration and select the configuration of worker nodes in the group:

    • arbitrary — any resource ratio can be specified;
    • or fixed with GPU — ready configurations of nodes with graphic processors and with specified resource ratio.

    If the default configurations are not suitable, once the cluster is created, you can Add a node group with a fixed cloud server configuration via the Managed Kubernetes API or Terraform.

    3.1. If you selected an arbitrary configuration, specify the number of vCPUs and the amount of RAM, select a boot disk, and specify the disk size.

    3.2. If you selected a fixed configuration with GPUs, select a ready-made node configuration with GPUs and a boot disk, and specify the disk size. To install GPU drivers yourself, turn off the GPU drivers toggle. By default, the GPU drivers toggle is enabled and the cluster uses pre-installed drivers.

    3.3. Click Save.

  4. Specify the number of worker nodes in the group.

  5. Optional: to enable autoscaling, check the Auto scaling of a group of nodes box. Set the minimum and maximum number of nodes in the group; the number of nodes will only change within this range.

  6. Optional: to add node group tags, open the Additional settings — tags, taints, user data block. In the Tags field, click Add. Enter the key and the value of the tag. Click Add.

  7. Optional: to add node group taints, open the Additional settings — tags, taints, user data block. In the Taints field, click Add. Enter the key and the value of the taint. Select the effect:

    • NoSchedule — new pods without a matching toleration will not be scheduled on the node; existing pods continue to run;
    • PreferNoSchedule — the scheduler will try to avoid placing new pods on the node, but may still do so if no other nodes are available;
    • NoExecute — running pods without a matching toleration will be evicted from the node.

    Click Add.

  8. Optional: to add a script with custom parameters for configuring the Managed Kubernetes cluster, open the Additional settings — tags, taints, user data block. In the User Data field, insert the script. Examples of scripts and the supported formats can be found in the User data manual.

  9. Optional: to add an additional group of worker nodes to the cluster, click Add a node group. You can create a cluster with groups of worker nodes in different segments of the same pool. This will increase fault tolerance and help maintain application availability if a failure occurs in one of the segments.

  10. Click Continue.
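The taint configured in step 7 can also be managed with kubectl after the cluster is up. The node name, key, and value below are examples, not values from this guide; pods that should still be scheduled on a tainted node need a matching toleration in their spec.

```shell
# Add the same taint from the command line (requires a working kubeconfig);
# skipped here if kubectl is not installed.
if command -v kubectl >/dev/null 2>&1; then
  kubectl taint nodes kelsie-node-gdc8q dedicated=gpu:NoSchedule
fi

# Example toleration snippet for a pod spec that should tolerate the taint:
cat <<'EOF' > toleration-snippet.yaml
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
EOF
```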

Set up automation

  1. Optional: to enable node auto-recovery, check the Restore nodes box. If the cluster has only one worker node, auto-recovery is not available.

  2. Optional: to enable auto-update of patch versions, check the Install patch versions box. If the cluster has only one worker node, Kubernetes patch auto-update is not available.

  3. Select a cluster service window — the time during which automatic cluster maintenance actions will take place.

  4. Click Create. Creating a cluster takes a few minutes; during this time the cluster has the CREATING status. The cluster is ready for use when it moves to the ACTIVE status.

Connect to the cluster

To start working with the cluster, you need to configure kubectl.

For your information

We recommend performing all actions with nodes, balancers, and disks in the cluster only through kubectl.

After the certificates of system components are renewed, you must reconnect to the cluster.

  1. Install the kubectl console client for Kubernetes following the official instructions.

  2. In the control panel, go to Cloud platform → Kubernetes.

  3. Open the cluster page → the Settings tab.

  4. If you use a private kube API, make sure you have access to it. The IP address is shown in the Kube API field.

  5. Click Download kubeconfig. The kubeconfig file cannot be downloaded while the cluster status is PENDING_CREATE, PENDING_ROTATE_CERTS, PENDING_DELETE, or ERROR.

  6. Export the path to the kubeconfig file to the KUBECONFIG environment variable:

    export KUBECONFIG=<path>

    Specify <path> — the path to the kubeconfig file cluster_name.yaml.

  7. Check that the configuration is correct by accessing the cluster via kubectl:

    kubectl get nodes

    The nodes must be in the Ready status.
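The connection steps above can be run as a single sketch. The kubeconfig path is an example: the downloaded file is named after your cluster, so adjust the path to your own download location.

```shell
# Point kubectl at the downloaded kubeconfig (example path and file name).
export KUBECONFIG="$HOME/Downloads/kelsie.yaml"

# Verify access; skipped here if kubectl is not installed.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes    # every node should report STATUS "Ready"
fi
```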

Customize Ingress

To route incoming traffic to the cluster, create an Ingress and an Ingress Controller.
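One common choice of controller is the community ingress-nginx project, installed below with its documented Helm command; this is a sketch, not the only supported option for this platform. The Ingress manifest uses example names (app.example.com, example-service) that you would replace with your own.

```shell
# Install the ingress-nginx controller via Helm (requires cluster access);
# skipped here if helm is not installed.
if command -v helm >/dev/null 2>&1; then
  helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx --create-namespace
fi

# A minimal Ingress routing traffic for one host to a Service on port 80:
cat <<'EOF' > example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
EOF
# Apply it once the controller is running:
# kubectl apply -f example-ingress.yaml
```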