Create a fault-tolerant Managed Kubernetes cluster
We recommend creating the resources in the order listed below. If you describe all the resources in a single configuration file, Terraform creates them regardless of the order in which they appear in the file.
- Optional: configure the providers.
- Create a private network and subnet.
- Create a cloud router connected to an external network.
- Create a fault-tolerant cluster.
- Create a node group with a network disk.
Configuration files
Example file for configuring providers
terraform {
required_providers {
selectel = {
source = "selectel/selectel"
version = "6.0.0"
}
openstack = {
source = "terraform-provider-openstack/openstack"
version = "2.1.0"
}
}
}
provider "selectel" {
domain_name = "123456"
username = "user"
password = "password"
auth_region = "pool"
auth_url = "https://cloud.api.selcloud.ru/identity/v3/"
}
resource "selectel_vpc_project_v2" "project_1" {
name = "project"
}
resource "selectel_iam_serviceuser_v1" "serviceuser_1" {
name = "username"
password = "password"
role {
role_name = "member"
scope = "project"
project_id = selectel_vpc_project_v2.project_1.id
}
}
provider "openstack" {
auth_url = "https://cloud.api.selcloud.ru/identity/v3"
domain_name = "123456"
tenant_id = selectel_vpc_project_v2.project_1.id
user_name = selectel_iam_serviceuser_v1.serviceuser_1.name
password = selectel_iam_serviceuser_v1.serviceuser_1.password
region = "ru-9"
}
Example file for a fault-tolerant cluster with nodes of arbitrary configuration
resource "openstack_networking_network_v2" "network_1" {
name = "private-network"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "subnet_1" {
name = "private-subnet"
network_id = openstack_networking_network_v2.network_1.id
cidr = "192.168.199.0/24"
}
data "openstack_networking_network_v2" "external_network_1" {
external = true
}
resource "openstack_networking_router_v2" "router_1" {
name = "router"
external_network_id = data.openstack_networking_network_v2.external_network_1.id
}
resource "openstack_networking_router_interface_v2" "router_interface_1" {
router_id = openstack_networking_router_v2.router_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
}
data "selectel_mks_kube_versions_v1" "versions" {
project_id = selectel_vpc_project_v2.project_1.id
region = "ru-9"
}
resource "selectel_mks_cluster_v1" "cluster_1" {
name = "high_availability_cluster"
project_id = selectel_vpc_project_v2.project_1.id
region = "ru-9"
kube_version = data.selectel_mks_kube_versions_v1.versions.latest_version
network_id = openstack_networking_network_v2.network_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
maintenance_window_start = "00:00:00"
}
resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
cluster_id = selectel_mks_cluster_v1.cluster_1.id
project_id = selectel_mks_cluster_v1.cluster_1.project_id
region = selectel_mks_cluster_v1.cluster_1.region
availability_zone = "ru-9a"
nodes_count = "2"
cpus = 2
ram_mb = 4096
volume_gb = 32
volume_type = "fast.ru-9a"
labels = {
"label-key0": "label-value0",
"label-key1": "label-value1",
"label-key2": "label-value2",
}
}
Example file for a fault-tolerant cluster with fixed-configuration nodes (flavors)
resource "openstack_networking_network_v2" "network_1" {
name = "private-network"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "subnet_1" {
network_id = openstack_networking_network_v2.network_1.id
cidr = "192.168.199.0/24"
}
data "openstack_networking_network_v2" "external_network_1" {
external = true
}
resource "openstack_networking_router_v2" "router_1" {
name = "router"
external_network_id = data.openstack_networking_network_v2.external_network_1.id
}
resource "openstack_networking_router_interface_v2" "router_interface_1" {
router_id = openstack_networking_router_v2.router_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
}
data "selectel_mks_kube_versions_v1" "versions" {
project_id = selectel_vpc_project_v2.project_1.id
region = "ru-9"
}
resource "selectel_mks_cluster_v1" "cluster_1" {
name = "high_availability_cluster"
project_id = selectel_vpc_project_v2.project_1.id
region = "ru-9"
kube_version = data.selectel_mks_kube_versions_v1.versions.latest_version
network_id = openstack_networking_network_v2.network_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
maintenance_window_start = "00:00:00"
}
resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
cluster_id = selectel_mks_cluster_v1.cluster_1.id
project_id = selectel_mks_cluster_v1.cluster_1.project_id
region = selectel_mks_cluster_v1.cluster_1.region
availability_zone = "ru-9a"
nodes_count = "2"
flavor_id = "1011"
volume_gb = 32
volume_type = "fast.ru-9a"
labels = {
"label-key0": "label-value0",
"label-key1": "label-value1",
"label-key2": "label-value2",
}
}
Optional: configure the providers
If you have already configured the Selectel and OpenStack providers, skip this step.
-
Make sure that you have created a service user with the Account Administrator and User Administrator roles in the control panel.
-
Create a directory to store the configuration files and a separate file with the .tf extension to configure the providers.
-
Add the Selectel and OpenStack providers to the provider configuration file:
terraform {
required_providers {
selectel = {
source = "selectel/selectel"
version = "6.0.0"
}
openstack = {
source = "terraform-provider-openstack/openstack"
version = "2.1.0"
}
}
}
Here:
version — provider versions. You can find the current versions in the documentation for the Selectel provider (in the Terraform Registry and on GitHub) and for the OpenStack provider (in the Terraform Registry and on GitHub). Read more about the products and services that can be managed with the providers in the Selectel and OpenStack providers instructions.
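Pinning an exact version means editing the file for every provider release. As a sketch, you can instead use Terraform's pessimistic version constraint operator, which accepts newer patch and minor releases within a major version (check the provider changelogs before widening a constraint):

```hcl
terraform {
  required_providers {
    selectel = {
      source = "selectel/selectel"
      # "~> 6.0" allows any 6.x release, but not 7.0
      version = "~> 6.0"
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # "~> 2.1" allows 2.1.0 and later 2.x releases, but not 3.0
      version = "~> 2.1"
    }
  }
}
```

Run terraform init -upgrade to move to the newest version allowed by the constraint.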
-
Initialize the Selectel provider:
provider "selectel" {
domain_name = "123456"
username = "user"
password = "password"
auth_region = "pool"
auth_url = "https://cloud.api.selcloud.ru/identity/v3/"
}
Here:
domain_name — Selectel account number. You can find it in the control panel in the upper right corner;
username — name of the service user with the Account Administrator and User Administrator roles. You can find it in the control panel: Identity & Access Management section → User management → Service users tab (the section is available only to the Account Owner and User Administrator);
password — service user password. You can view it when creating the user or change it to a new one.
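Hardcoding credentials in configuration files is risky if the files end up in version control. As a sketch, you can declare them as Terraform input variables and pass the values through TF_VAR_* environment variables (the variable names below are illustrative, not part of the provider):

```hcl
# Illustrative variable names; use any names that match your references
variable "domain_name" {
  type = string
}

variable "sa_password" {
  type      = string
  sensitive = true # keeps the value out of plan output
}

provider "selectel" {
  domain_name = var.domain_name
  username    = "user"
  password    = var.sa_password
  auth_region = "pool"
  auth_url    = "https://cloud.api.selcloud.ru/identity/v3/"
}
```

Before running Terraform, set the values in the shell, for example export TF_VAR_domain_name=123456 and export TF_VAR_sa_password=... .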
-
Create a project:
resource "selectel_vpc_project_v2" "project_1" {
name = "project"
}
See the detailed description of the selectel_vpc_project_v2 resource.
-
Create a service user to access the project and assign the Project Administrator role to it:
resource "selectel_iam_serviceuser_v1" "serviceuser_1" {
name = "username"
password = "password"
role {
role_name = "member"
scope = "project"
project_id = selectel_vpc_project_v2.project_1.id
}
}
Here:
username — user name;
password — user password. The password must be at least eight characters long and contain digits and uppercase and lowercase Latin letters;
project_id — project ID. You can find it in the control panel: Cloud platform section → open the project menu (name of the current project) → in the row of the desired project, click .
See the detailed description of the selectel_iam_serviceuser_v1 resource.
-
Initialize the OpenStack provider:
provider "openstack" {
auth_url = "https://cloud.api.selcloud.ru/identity/v3"
domain_name = "123456"
tenant_id = selectel_vpc_project_v2.project_1.id
user_name = selectel_iam_serviceuser_v1.serviceuser_1.name
password = selectel_iam_serviceuser_v1.serviceuser_1.password
region = "ru-9"
}
Here:
domain_name — Selectel account number. You can find it in the control panel in the upper right corner;
region — pool, for example ru-9. All resources will be created in this pool. The list of available pools is in the Availability matrices instructions.
-
If you configure the providers and create resources in the same configuration, add the
depends_on
argument to the OpenStack resources. For example, for the openstack_networking_network_v2 resource:
resource "openstack_networking_network_v2" "network_1" {
name = "private-network"
admin_state_up = "true"
depends_on = [
selectel_vpc_project_v2.project_1,
selectel_iam_serviceuser_v1.serviceuser_1
]
}
-
Optional: if you want to use a mirror, create a separate Terraform CLI configuration file and add a block to it:
provider_installation {
network_mirror {
url = "https://tf-proxy.selectel.ru/mirror/v1/"
include = ["registry.terraform.io/*/*"]
}
direct {
exclude = ["registry.terraform.io/*/*"]
}
}
Read more about mirror settings in the CLI Configuration File section of the HashiCorp documentation.
-
Open the CLI.
-
Initialize the Terraform configuration in the directory:
terraform init
-
Check that the configuration files contain no errors:
terraform validate
-
Format the configuration files:
terraform fmt
-
Check the resources that will be created:
terraform plan
-
Apply the changes and create the resources:
terraform apply
-
Confirm creation — enter yes and press Enter. The created resources are displayed in the control panel.
-
If there were not enough quotas to create the resources, increase the quotas.
Create a private network and subnet
resource "openstack_networking_network_v2" "network_1" {
name = "private-network"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "subnet_1" {
name = "private-subnet"
network_id = openstack_networking_network_v2.network_1.id
cidr = "192.168.199.0/24"
dns_nameservers = ["188.93.16.19", "188.93.17.19"]
enable_dhcp = false
}
Here:
cidr — CIDR of the private subnet, for example 192.168.199.0/24;
dns_nameservers — DNS servers, for example the Selectel DNS servers 188.93.16.19 and 188.93.17.19.
See a detailed description of the resources:
- openstack_networking_network_v2;
- openstack_networking_subnet_v2.
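If other configurations or tools need the IDs of the created network objects, you can export them with Terraform output blocks (a sketch using the standard output mechanism; the output names are illustrative):

```hcl
output "private_network_id" {
  value = openstack_networking_network_v2.network_1.id
}

output "private_subnet_id" {
  value = openstack_networking_subnet_v2.subnet_1.id
}
```

The values are printed after terraform apply and can be read later with terraform output.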
Create a cloud router connected to an external network
A cloud router connected to an external network acts as a 1:1 NAT for access from a private network to the Internet through the public IP address of the router.
data "openstack_networking_network_v2" "external_network_1" {
external = true
}
resource "openstack_networking_router_v2" "router_1" {
name = "router"
external_network_id = data.openstack_networking_network_v2.external_network_1.id
}
resource "openstack_networking_router_interface_v2" "router_interface_1" {
router_id = openstack_networking_router_v2.router_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
}
See a detailed description of the resources:
- openstack_networking_network_v2;
- openstack_networking_router_v2;
- openstack_networking_router_interface_v2.
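Because the external network is looked up with a data source rather than named explicitly, it can help to surface which network the router was attached to. A sketch with output blocks (names are illustrative):

```hcl
output "external_network_name" {
  # Name of the external network found by the data source
  value = data.openstack_networking_network_v2.external_network_1.name
}

output "router_id" {
  value = openstack_networking_router_v2.router_1.id
}
```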
Create a fault-tolerant cluster
data "selectel_mks_kube_versions_v1" "versions" {
project_id = selectel_vpc_project_v2.project_1.id
region = "ru-9"
}
resource "selectel_mks_cluster_v1" "cluster_1" {
name = "high_availability_cluster"
project_id = selectel_vpc_project_v2.project_1.id
region = "ru-9"
kube_version = data.selectel_mks_kube_versions_v1.versions.latest_version
network_id = openstack_networking_network_v2.network_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
maintenance_window_start = "00:00:00"
}
Here:
region — pool in which the cluster will be created, for example ru-9.
See the detailed description of the selectel_mks_cluster_v1 resource.
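Since kube_version is resolved from the selectel_mks_kube_versions_v1 data source at plan time, it is useful to surface the value that was actually applied. A sketch with output blocks (output names are illustrative):

```hcl
output "cluster_kube_version" {
  # Version the cluster was created with
  value = selectel_mks_cluster_v1.cluster_1.kube_version
}

output "latest_kube_version" {
  # Latest version currently offered in the pool
  value = data.selectel_mks_kube_versions_v1.versions.latest_version
}
```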
Create a node group with a network disk
Fixed configuration (flavors)
resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
cluster_id = selectel_mks_cluster_v1.cluster_1.id
project_id = selectel_mks_cluster_v1.cluster_1.project_id
region = selectel_mks_cluster_v1.cluster_1.region
availability_zone = "ru-9a"
nodes_count = "2"
flavor_id = "1011"
volume_gb = 32
volume_type = "fast.ru-9a"
labels = {
"label-key0": "label-value0",
"label-key1": "label-value1",
"label-key2": "label-value2",
}
}
Here:
availability_zone — pool segment where the node group will be located, for example ru-9a;
nodes_count — number of worker nodes in the node group. The maximum number of nodes is 15;
flavor_id — flavor ID. Flavors correspond to cloud server configurations and determine the number of vCPUs, the amount of RAM, and, optionally, the local disk size of a node. For example, 3031 is the flavor for a GPU Line node with 4 vCPUs and 32 GB of RAM. You can view the list of flavors in a specific pool in the OpenStack CLI;
volume_gb — disk size in GB. If a disk size is already defined by the configuration you selected with the flavor_id argument, you can omit volume_gb;
volume_type — disk type in the <type>.<pool_segment> format, for example basic.ru-9a:
<type> — basic, universal, or fast;
<pool_segment> — pool segment where the network disk will be created, for example ru-9a.
See the detailed description of the selectel_mks_nodegroup_v1 resource.
Arbitrary configuration
resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
cluster_id = selectel_mks_cluster_v1.cluster_1.id
project_id = selectel_mks_cluster_v1.cluster_1.project_id
region = selectel_mks_cluster_v1.cluster_1.region
availability_zone = "ru-9a"
nodes_count = "2"
cpus = 2
ram_mb = 4096
volume_gb = 32
volume_type = "fast.ru-9a"
labels = {
"label-key0": "label-value0",
"label-key1": "label-value1",
"label-key2": "label-value2",
}
}
Here:
availability_zone — pool segment where the node group will be located, for example ru-9a;
nodes_count — number of worker nodes in the node group. The maximum number of nodes is 15;
cpus — number of vCPUs for each node;
ram_mb — amount of RAM for each node in MB;
volume_gb — disk size in GB;
volume_type — disk type in the <type>.<pool_segment> format, for example basic.ru-9a:
<type> — basic, universal, or fast;
<pool_segment> — pool segment where the network disk will be created, for example ru-9a.
See the detailed description of the selectel_mks_nodegroup_v1 resource.
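If the load on the cluster varies, the node group can be scaled automatically instead of fixing nodes_count. The sketch below assumes the provider's autoscaling arguments (enable_autoscale, autoscale_min_nodes, autoscale_max_nodes); verify the exact argument names against the selectel_mks_nodegroup_v1 reference before using them:

```hcl
resource "selectel_mks_nodegroup_v1" "nodegroup_autoscaled" {
  cluster_id        = selectel_mks_cluster_v1.cluster_1.id
  project_id        = selectel_mks_cluster_v1.cluster_1.project_id
  region            = selectel_mks_cluster_v1.cluster_1.region
  availability_zone = "ru-9a"
  nodes_count       = 2 # initial size
  cpus              = 2
  ram_mb            = 4096
  volume_gb         = 32
  volume_type       = "fast.ru-9a"

  # Assumed autoscaling arguments -- check the provider reference
  enable_autoscale    = true
  autoscale_min_nodes = 2
  autoscale_max_nodes = 5
}
```

With autoscaling enabled, the cluster adds or removes worker nodes within the min/max bounds based on pending workloads.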