Managed Kubernetes: a quick start

You can work with a Managed Kubernetes cluster in the control panel, via the Managed Kubernetes API, or with Terraform.

  1. Create a cluster in the control panel.
  2. Connect to the cluster.
  3. Set up Ingress.

For more on infrastructure planning and application placement in Managed Kubernetes, see the blog article Cloud-native in Kubernetes.

Create a cluster in the control panel

  1. In Control Panel, go to Cloud Platform → Kubernetes.

  2. Click Create Cluster. A maximum of two Managed Kubernetes clusters can be created in the same project and pool.

  3. Enter the name of the cluster. The name will appear in the names of the cluster objects: node groups, nodes, balancers, networks, and disks. For example, if the cluster name is kelsie, the node group name would be kelsie-node-gdc8q and the boot disk name would be kelsie-node-gdc8q-volume.

  4. Select a region and pool. The pool cannot be changed after the cluster is created.

  5. Select the Kubernetes version. You can upgrade the Kubernetes version after the cluster is created.

  6. Select cluster type:

    • fault-tolerant — three master nodes are created, distributed across different hosts in segments of the same pool. The control plane will continue to operate if one of the three master nodes becomes unavailable;
    • basic — one master node is created in one segment of the pool. The control plane will not be available if the master node fails.

    The cluster type cannot be changed after the cluster is created.

  7. Optional: to keep the cluster on a private network, inaccessible from the Internet, check the Private kube API checkbox. By default, the cluster is created in a public network and is automatically assigned a public IP address for the kube API, accessible from the Internet. The type of kube API access cannot be changed after the cluster is created.

  8. Select or create a private subnet to which all nodes in the cluster will be connected. If you create a new subnet, the CIDR is assigned automatically. The subnet must be connected to a cloud router.

  9. Click Continue.

  10. Create the first group of worker nodes. Select the pool segment where all worker nodes in the group will be located.

  11. Click Select Configuration.

  12. Select the configuration of the worker nodes in the group:

    If the default configurations are not suitable, create a cluster with a node group based on a prebuilt cloud server configuration via the Managed Kubernetes API or Terraform.

  13. Click Save.

  14. Specify the number of worker nodes in the group.

  15. Optional: add node group labels — they help distinguish worker nodes of one group from worker nodes of another group when working with kubectl. Specify the key and value of the label, then click Add.
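
    For example, with a hypothetical node group label group: frontend, you can list the nodes of that group with kubectl get nodes -l group=frontend, or pin pods to those nodes with a nodeSelector. A minimal sketch (the label, pod name, and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: frontend-app
    spec:
      nodeSelector:
        group: frontend    # hypothetical node group label
      containers:
        - name: app
          image: nginx:1.25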

  16. Optional: add node group taints — markers that indicate where pods should not be scheduled. Specify the key and value of the taint, and select the effect:

    • NoSchedule — new pods will not be scheduled on the nodes; existing pods will continue to run;
    • PreferNoSchedule — the scheduler will try to avoid placing new pods on the nodes, but may do so if no other nodes are available;
    • NoExecute — new pods will not be scheduled, and running pods without a matching toleration will be evicted.

    Click Add.
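
    A pod is scheduled onto tainted nodes only if its spec declares a matching toleration. A sketch assuming a hypothetical taint dedicated=gpu with the NoSchedule effect (all names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-job
    spec:
      tolerations:
        - key: dedicated    # hypothetical taint key
          operator: Equal
          value: gpu
          effect: NoSchedule
      containers:
        - name: job
          image: busybox:1.36
          command: ["sleep", "3600"]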

  17. Optional: add a user data script — user parameters to configure the Managed Kubernetes cluster, in cloud-config format or as a bash script. The maximum size of a script before Base64 encoding is 47 KB.
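
    For example, a minimal user data script in cloud-config format (the package and kernel parameter are illustrative):

    #cloud-config
    package_update: true
    packages:
      - htop
    runcmd:
      - sysctl -w vm.max_map_count=262144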

  18. Optional: to add an additional group of worker nodes to the cluster, click Add Node Group. You can create a cluster with node groups in different segments of the same pool. This increases fault tolerance and helps keep the application available if a failure occurs in one of the segments.

  19. Click Continue.

  20. Optional: to enable automatic node reinstallation, check the Reinstall nodes checkbox. If the cluster has only one worker node, auto-recovery is not available.

  21. Optional: to enable automatic patch version upgrades, check the Install patch versions checkbox. If the cluster has only one worker node, automatic Kubernetes patch updates are not available.

  22. Select the maintenance window for the cluster — the time at which automatic cluster maintenance will occur.

  23. Click Create. Creating a cluster takes a few minutes; during this time the cluster will be in the CREATING status. The cluster is ready for use when it moves to the ACTIVE status.

Connect to the cluster

To get started with the cluster, you must configure kubectl.

For your information

We recommend performing all actions with nodes, balancers, and disks in the cluster only through kubectl.

After certificates for system components are updated, you must reconnect to the cluster.

  1. Install the kubectl command-line client following the official instructions.

  2. In Control Panel, go to Cloud Platform → Kubernetes.

  3. Open the cluster page → Settings tab.

  4. If you use a private kube API, check that you have access to it. The IP address is specified in the Kube API field.

  5. Click Download kubeconfig.

  6. Export the path to the kubeconfig file in the KUBECONFIG environment variable:

    export KUBECONFIG=<path>

    Replace <path> with the path to the kubeconfig file cluster_name.yaml.

  7. Check that the configuration is correct by accessing the cluster via kubectl:

    kubectl get nodes

    The nodes must be in the Ready status.

Set up Ingress

Create an Ingress and an Ingress Controller to manage inbound traffic to the cluster.
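
For example, a minimal Ingress manifest, assuming an ingress controller with the nginx ingress class is already installed in the cluster and a Service named app-svc listens on port 80 (the host, service name, and port are assumptions):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-ingress
    spec:
      ingressClassName: nginx    # assumes the nginx ingress class is installed
      rules:
        - host: app.example.com  # illustrative hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app-svc   # hypothetical existing Service
                    port:
                      number: 80

Apply the manifest with kubectl apply -f and point the DNS record for the host at the load balancer created for the controller.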