Create a cloud server, a Managed Kubernetes cluster and a cloud database cluster in a private subnetwork

This is an example of building an infrastructure that consists of:

  • a private network with the private subnet 192.168.199.0/24;
  • a cloud server of arbitrary configuration with a bootable network disk and an additional network disk;
  • fault-tolerant Managed Kubernetes cluster with nodes of arbitrary configuration;
  • MySQL semi-sync cluster of arbitrary configuration.

We recommend that you create resources in order. If you create all resources at once, Terraform takes into account the dependencies between resources that you specify in the configuration file. If dependencies are not specified, resources are created in parallel, which can cause errors: for example, a resource that is required to create another resource may not exist yet.
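If you keep everything in a single configuration, you can still force an explicit creation order with the Terraform CLI's -target flag; a sketch (the resource addresses are taken from the examples below):

    terraform apply -target=selectel_vpc_project_v2.project_1
    terraform apply -target=openstack_networking_network_v2.network_1
    terraform apply

The final terraform apply without -target creates the remaining resources once their dependencies exist.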


  1. Optional: configure the providers.

  2. Add a public SSH key.

  3. Create a private network and subnet.

  4. Create a cloud router connected to an external network.

  5. Create a cloud server.

  6. Create a Managed Kubernetes cluster.

  7. Create a cloud database cluster.

Configuration files

Example file for configuring providers
terraform {
  required_providers {
    selectel = {
      source  = "selectel/selectel"
      version = "~> 6.0"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "2.1.0"
    }
  }
}

provider "selectel" {
  domain_name = "123456"
  username    = "user"
  password    = "password"
  auth_region = "ru-9"
  auth_url    = "https://cloud.api.selcloud.ru/identity/v3/"
}

resource "selectel_vpc_project_v2" "project_1" {
  name = "project"
}

resource "selectel_iam_serviceuser_v1" "serviceuser_1" {
  name     = "username"
  password = "password"
  role {
    role_name  = "member"
    scope      = "project"
    project_id = selectel_vpc_project_v2.project_1.id
  }
}

provider "openstack" {
  auth_url    = "https://cloud.api.selcloud.ru/identity/v3"
  domain_name = "123456"
  tenant_id   = selectel_vpc_project_v2.project_1.id
  user_name   = selectel_iam_serviceuser_v1.serviceuser_1.name
  password    = selectel_iam_serviceuser_v1.serviceuser_1.password
  region      = "ru-9"
}
Example file for creating a server of arbitrary configuration with a bootable network disk and an additional network disk
resource "selectel_vpc_keypair_v2" "keypair_1" {
  name       = "keypair"
  public_key = file("~/.ssh/id_rsa.pub")
  user_id    = selectel_iam_serviceuser_v1.serviceuser_1.id
}

resource "openstack_compute_flavor_v2" "flavor_1" {
  name      = "custom-flavor-with-network-volume"
  vcpus     = 2
  ram       = 2048
  disk      = 0
  is_public = false

  lifecycle {
    create_before_destroy = true
  }
}

resource "openstack_networking_network_v2" "network_1" {
  name           = "private-network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet_1" {
  network_id = openstack_networking_network_v2.network_1.id
  cidr       = "192.168.199.0/24"
}

data "openstack_networking_network_v2" "external_network_1" {
  external = true
}

resource "openstack_networking_router_v2" "router_1" {
  name                = "router"
  external_network_id = data.openstack_networking_network_v2.external_network_1.id
}

resource "openstack_networking_router_interface_v2" "router_interface_1" {
  router_id = openstack_networking_router_v2.router_1.id
  subnet_id = openstack_networking_subnet_v2.subnet_1.id
}

resource "openstack_networking_port_v2" "port_1" {
  name       = "port"
  network_id = openstack_networking_network_v2.network_1.id

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.subnet_1.id
  }
}

data "openstack_images_image_v2" "image_1" {
  name        = "Ubuntu 20.04 LTS 64-bit"
  most_recent = true
  visibility  = "public"
}

resource "openstack_blockstorage_volume_v3" "volume_1" {
  name                 = "boot-volume-for-server"
  size                 = "5"
  image_id             = data.openstack_images_image_v2.image_1.id
  volume_type          = "fast.ru-9a"
  availability_zone    = "ru-9a"
  enable_online_resize = true

  lifecycle {
    ignore_changes = [image_id]
  }
}

resource "openstack_blockstorage_volume_v3" "volume_2" {
  name                 = "additional-volume-for-server"
  size                 = "7"
  volume_type          = "universal.ru-9a"
  availability_zone    = "ru-9a"
  enable_online_resize = true
}

resource "openstack_compute_instance_v2" "server_1" {
  name              = "server"
  flavor_id         = openstack_compute_flavor_v2.flavor_1.id
  key_pair          = selectel_vpc_keypair_v2.keypair_1.name
  availability_zone = "ru-9a"

  network {
    port = openstack_networking_port_v2.port_1.id
  }

  lifecycle {
    ignore_changes = [image_id]
  }

  block_device {
    uuid             = openstack_blockstorage_volume_v3.volume_1.id
    source_type      = "volume"
    destination_type = "volume"
    boot_index       = 0
  }

  block_device {
    uuid             = openstack_blockstorage_volume_v3.volume_2.id
    source_type      = "volume"
    destination_type = "volume"
    boot_index       = -1
  }

  vendor_options {
    ignore_resize_confirmation = true
  }
}

resource "openstack_networking_floatingip_v2" "floatingip_1" {
  pool = "external-network"
}

resource "openstack_networking_floatingip_associate_v2" "association_1" {
  port_id     = openstack_networking_port_v2.port_1.id
  floating_ip = openstack_networking_floatingip_v2.floatingip_1.address
}

output "public_ip_address" {
  value = openstack_networking_floatingip_v2.floatingip_1.address
}
Example file for creating a fault-tolerant Managed Kubernetes cluster with nodes of arbitrary configuration
resource "openstack_networking_network_v2" "network_1" {
  name           = "private-network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet_1" {
  name       = "private-subnet"
  network_id = openstack_networking_network_v2.network_1.id
  cidr       = "192.168.199.0/24"
}

data "openstack_networking_network_v2" "external_network_1" {
  external = true
}

resource "openstack_networking_router_v2" "router_1" {
  name                = "router"
  external_network_id = data.openstack_networking_network_v2.external_network_1.id
}

resource "openstack_networking_router_interface_v2" "router_interface_1" {
  router_id = openstack_networking_router_v2.router_1.id
  subnet_id = openstack_networking_subnet_v2.subnet_1.id
}

data "selectel_mks_kube_versions_v1" "versions" {
  project_id = selectel_vpc_project_v2.project_1.id
  region     = "ru-9"
}

resource "selectel_mks_cluster_v1" "cluster_1" {
  name                     = "high_availability_cluster"
  project_id               = selectel_vpc_project_v2.project_1.id
  region                   = "ru-9"
  kube_version             = data.selectel_mks_kube_versions_v1.versions.latest_version
  network_id               = openstack_networking_network_v2.network_1.id
  subnet_id                = openstack_networking_subnet_v2.subnet_1.id
  maintenance_window_start = "00:00:00"
}

resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
  cluster_id                   = selectel_mks_cluster_v1.cluster_1.id
  project_id                   = selectel_mks_cluster_v1.cluster_1.project_id
  region                       = selectel_mks_cluster_v1.cluster_1.region
  availability_zone            = "ru-9a"
  nodes_count                  = "2"
  cpus                         = 2
  ram_mb                       = 4096
  volume_gb                    = 32
  volume_type                  = "fast.ru-9a"
  install_nvidia_device_plugin = false
  labels = {
    "label-key0" : "label-value0",
    "label-key1" : "label-value1",
    "label-key2" : "label-value2",
  }
}
Example file for creating a MySQL semi-sync cluster of arbitrary configuration
resource "openstack_networking_network_v2" "network_1" {
  name           = "private-network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet_1" {
  network_id = openstack_networking_network_v2.network_1.id
  cidr       = "192.168.199.0/24"
}

data "selectel_dbaas_datastore_type_v1" "datastore_type_1" {
  project_id = selectel_vpc_project_v2.project_1.id
  region     = "ru-9"
  filter {
    engine  = "mysql_native"
    version = "8"
  }
}

resource "selectel_dbaas_mysql_datastore_v1" "datastore_1" {
  name       = "datastore-1"
  project_id = selectel_vpc_project_v2.project_1.id
  region     = "ru-9"
  type_id    = data.selectel_dbaas_datastore_type_v1.datastore_type_1.datastore_types[0].id
  subnet_id  = openstack_networking_subnet_v2.subnet_1.id
  node_count = 3
  flavor {
    vcpus = 1
    ram   = 4096
    disk  = 32
  }
}

resource "selectel_dbaas_user_v1" "user_1" {
  project_id   = selectel_vpc_project_v2.project_1.id
  region       = "ru-9"
  datastore_id = selectel_dbaas_mysql_datastore_v1.datastore_1.id
  name         = "user"
  password     = "secret"
}

resource "selectel_dbaas_mysql_database_v1" "database_1" {
  project_id   = selectel_vpc_project_v2.project_1.id
  region       = "ru-9"
  datastore_id = selectel_dbaas_mysql_datastore_v1.datastore_1.id
  name         = "database_1"
}

resource "selectel_dbaas_grant_v1" "grant_1" {
  project_id   = selectel_vpc_project_v2.project_1.id
  region       = "ru-9"
  datastore_id = selectel_dbaas_mysql_datastore_v1.datastore_1.id
  database_id  = selectel_dbaas_mysql_database_v1.database_1.id
  user_id      = selectel_dbaas_user_v1.user_1.id
}

1. Optional: configure providers

If you have configured Selectel and OpenStack providers, skip this step.

  1. Make sure that you have created a service user with the Account Administrator and User Administrator roles in the control panel.

  2. Create a directory to store the configuration files and a separate file with a .tf extension to configure the providers.

  3. Add Selectel and OpenStack providers to the file to configure the providers:

    terraform {
      required_providers {
        selectel = {
          source  = "selectel/selectel"
          version = "~> 6.0"
        }
        openstack = {
          source  = "terraform-provider-openstack/openstack"
          version = "2.1.0"
        }
      }
    }

    Here version is the provider version. The current versions can be found in the Selectel documentation (in the Terraform Registry and on GitHub) and the OpenStack documentation (in the Terraform Registry and on GitHub).

    Learn more about the products and services that can be managed with the providers in the Selectel and OpenStack providers instruction.

  4. Initialize the Selectel provider:

    provider "selectel" {
      domain_name = "123456"
      username    = "user"
      password    = "password"
      auth_region = "ru-9"
      auth_url    = "https://cloud.api.selcloud.ru/identity/v3/"
    }

    Here:

    • domain_name — Selectel account number. You can look it up in the control panel in the upper right corner;
    • username — username of the service user with the Account Administrator and User Administrator roles. You can view it in the control panel under Access Control → User Management → the Service Users tab (the section is available only to the Account Owner and the User Administrator);
    • password — password of the service user. You can view it when creating the user or change it to a new one;
    • auth_region — pool, for example ru-9. All resources will be created in this pool. The list of available pools can be found in the Availability matrices instructions.
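    Hardcoding credentials in configuration files is risky. As a sketch (the variable name sel_password is an arbitrary choice here), you can move the password into a sensitive input variable and pass it through the TF_VAR_sel_password environment variable or the -var flag:

    variable "sel_password" {
      type      = string
      sensitive = true
    }

    provider "selectel" {
      domain_name = "123456"
      username    = "user"
      password    = var.sel_password
      auth_region = "ru-9"
      auth_url    = "https://cloud.api.selcloud.ru/identity/v3/"
    }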
  5. Create a project:

    resource "selectel_vpc_project_v2" "project_1" {
      name = "project"
    }

    View a detailed description of the selectel_vpc_project_v2 resource.

  6. Create a service user to access the project and assign the Project Administrator role to it:

    resource "selectel_iam_serviceuser_v1" "serviceuser_1" {
      name     = "username"
      password = "password"
      role {
        role_name  = "member"
        scope      = "project"
        project_id = selectel_vpc_project_v2.project_1.id
      }
    }

    Here:

    • name — username;
    • password — user password. The password must be at least eight characters long and contain Latin letters of different cases and digits;
    • project_id — project ID. You can view it in the control panel: in the Cloud Platform section, open the projects menu (the name of the current project) and open the settings of the required project.

    View a detailed description of the selectel_iam_serviceuser_v1 resource.

  7. Initialize the OpenStack provider:

    provider "openstack" {
      auth_url    = "https://cloud.api.selcloud.ru/identity/v3"
      domain_name = "123456"
      tenant_id   = selectel_vpc_project_v2.project_1.id
      user_name   = selectel_iam_serviceuser_v1.serviceuser_1.name
      password    = selectel_iam_serviceuser_v1.serviceuser_1.password
      region      = "ru-9"
    }

    Here:

    • domain_name — Selectel account number. You can look it up in the control panel in the upper right corner;
    • region — pool, for example ru-9. All resources will be created in this pool. The list of available pools can be found in the Availability matrices instructions.
  8. If you create resources at the same time as you configure the providers, add the depends_on argument to the OpenStack resources. For example, for the openstack_networking_network_v2 resource:

    resource "openstack_networking_network_v2" "network_1" {
      name           = "private-network"
      admin_state_up = "true"

      depends_on = [
        selectel_vpc_project_v2.project_1,
        selectel_iam_serviceuser_v1.serviceuser_1
      ]
    }
  9. Optional: if you want to use a mirror, create a separate Terraform CLI configuration file and add a block to it:

    provider_installation {
      network_mirror {
        url     = "https://tf-proxy.selectel.ru/mirror/v1/"
        include = ["registry.terraform.io/*/*"]
      }
      direct {
        exclude = ["registry.terraform.io/*/*"]
      }
    }

    For more information on configuring mirrors, see HashiCorp's CLI Configuration File documentation.

  10. Open the CLI.

  11. Initialize the Terraform configuration in the directory:

    terraform init
  12. Check that the configuration files contain no errors:

    terraform validate
  13. Format the configuration files:

    terraform fmt
  14. Check the resources that will be created:

    terraform plan
  15. Apply the changes and create the resources:

    terraform apply
  16. Confirm the creation — type yes and press Enter. The created resources are displayed in the control panel.

  17. If there are not enough quotas to create the resources, increase the quotas.

2. Add a public SSH key

resource "selectel_vpc_keypair_v2" "keypair_1" {
  name       = "keypair"
  public_key = file("~/.ssh/id_rsa.pub")
  user_id    = selectel_iam_serviceuser_v1.serviceuser_1.id
}

Here public_key is the path to the public SSH key. If you have not generated SSH keys yet, create them.
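For example, a minimal sketch for generating an RSA key pair at the path the configuration expects (the key type, size, and path are up to you):

```shell
# Generate a 4096-bit RSA key pair at ~/.ssh/id_rsa with an empty
# passphrase, but only if no public key exists there yet.
if [ ! -f ~/.ssh/id_rsa.pub ]; then
  mkdir -p ~/.ssh
  ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N "" -q
fi
```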

View a detailed description of the selectel_vpc_keypair_v2 resource.

3. Create a private network and subnet

resource "openstack_networking_network_v2" "network_1" {
  name           = "private-network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet_1" {
  name       = "private-subnet"
  network_id = openstack_networking_network_v2.network_1.id
  cidr       = "192.168.199.0/24"
}

Here cidr is the CIDR of the private subnet, for example 192.168.199.0/24.

See the detailed descriptions of the openstack_networking_network_v2 and openstack_networking_subnet_v2 resources.

4. Create a cloud router connected to an external network

A cloud router connected to an external network acts as a 1:1 NAT for access from a private network to the Internet through the public IP address of the router.

data "openstack_networking_network_v2" "external_network_1" {
  external = true
}

resource "openstack_networking_router_v2" "router_1" {
  name                = "router"
  external_network_id = data.openstack_networking_network_v2.external_network_1.id
}

resource "openstack_networking_router_interface_v2" "router_interface_1" {
  router_id = openstack_networking_router_v2.router_1.id
  subnet_id = openstack_networking_subnet_v2.subnet_1.id
}

See the detailed descriptions of the openstack_networking_network_v2 data source and the openstack_networking_router_v2 and openstack_networking_router_interface_v2 resources.

5. Create a cloud server

  1. Create a port for the cloud server.

  2. Get an image.

  3. Create a bootable disk.

  4. Create a server.

  5. Create a public IP address.

  6. Associate the public IP address with the private IP address of the cloud server.

1. Create a port for the cloud server

resource "openstack_networking_port_v2" "port_1" {
  name       = "port"
  network_id = openstack_networking_network_v2.network_1.id

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.subnet_1.id
  }
}

View a detailed description of the openstack_networking_port_v2 resource.

2. Get an image

data "openstack_images_image_v2" "image_1" {
  name        = "Ubuntu 20.04 LTS 64-bit"
  most_recent = true
  visibility  = "public"
}

See the detailed description of the openstack_images_image_v2 data source.

3. Create a bootable network disk

resource "openstack_blockstorage_volume_v3" "volume_1" {
  name                 = "boot-volume-for-server"
  size                 = "5"
  image_id             = data.openstack_images_image_v2.image_1.id
  volume_type          = "fast.ru-9a"
  availability_zone    = "ru-9a"
  enable_online_resize = true

  lifecycle {
    ignore_changes = [image_id]
  }
}

Here:

  • size — disk size in GB;
  • volume_type — disk type in the format <type>.<pool_segment>, for example fast.ru-9a;
  • availability_zone — pool segment in which the disk will be created, for example ru-9a.

View a detailed description of the openstack_blockstorage_volume_v3 resource.

4. Create a cloud server

resource "openstack_compute_instance_v2" "server_1" {
  name              = "server"
  flavor_id         = "4011"
  key_pair          = selectel_vpc_keypair_v2.keypair_1.name
  availability_zone = "ru-9a"

  network {
    port = openstack_networking_port_v2.port_1.id
  }

  lifecycle {
    ignore_changes = [image_id]
  }

  block_device {
    uuid             = openstack_blockstorage_volume_v3.volume_1.id
    source_type      = "volume"
    destination_type = "volume"
    boot_index       = 0
  }

  vendor_options {
    ignore_resize_confirmation = true
  }
}

Here:

  • availability_zone — pool segment in which the cloud server will be created, for example ru-9a. The list of available pool segments can be found in the Availability matrices instructions;
  • flavor_id — flavor ID. Flavors correspond to cloud server configurations and determine the number of vCPUs, the RAM, and the local disk size (optional) of the server. You can use flavors of fixed configurations. For example, 4011 is the ID for creating a Memory Line fixed-configuration server with 2 vCPUs and 16 GB RAM in the ru-9 pool. The list of flavors can be viewed in the table List of fixed configuration flavors in all pools.

See the detailed description of the openstack_compute_instance_v2 resource.

5. Create a public IP address

resource "openstack_networking_floatingip_v2" "floatingip_1" {
  pool = "external-network"
}

View a detailed description of the openstack_networking_floatingip_v2 resource.

6. Associate the public IP address with the private IP address of the cloud server

The public IP address will be connected to the cloud server port and associated with the private IP.

resource "openstack_networking_floatingip_associate_v2" "association_1" {
  port_id     = openstack_networking_port_v2.port_1.id
  floating_ip = openstack_networking_floatingip_v2.floatingip_1.address
}

View a detailed description of the openstack_networking_floatingip_associate_v2 resource.
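Once terraform apply has finished, the value of the public_ip_address output defined in the example file can be read back from the CLI, for example to connect to the server over SSH (the SSH user name depends on the image; root is an assumption here):

    terraform output public_ip_address
    ssh root@"$(terraform output -raw public_ip_address)"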

6. Create a Managed Kubernetes cluster

  1. Create a fault-tolerant cluster.

  2. Create a node group of arbitrary configuration with a network disk.

1. Create a fault-tolerant cluster

data "selectel_mks_kube_versions_v1" "versions" {
  project_id = selectel_vpc_project_v2.project_1.id
  region     = "ru-9"
}

resource "selectel_mks_cluster_v1" "cluster_1" {
  name                     = "high_availability_cluster"
  project_id               = selectel_vpc_project_v2.project_1.id
  region                   = "ru-9"
  kube_version             = data.selectel_mks_kube_versions_v1.versions.latest_version
  network_id               = openstack_networking_network_v2.network_1.id
  subnet_id                = openstack_networking_subnet_v2.subnet_1.id
  maintenance_window_start = "00:00:00"
}

Here region is the pool where the cluster will be created, for example ru-9.

View a detailed description of the selectel_mks_cluster_v1 resource.
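To work with the created cluster you need its kubeconfig. As a sketch, recent versions of the Selectel provider offer a selectel_mks_kubeconfig_v1 data source for this (check the provider documentation for availability in your version):

    data "selectel_mks_kubeconfig_v1" "kubeconfig_1" {
      cluster_id = selectel_mks_cluster_v1.cluster_1.id
      project_id = selectel_vpc_project_v2.project_1.id
      region     = "ru-9"
    }

    output "kubeconfig" {
      value     = data.selectel_mks_kubeconfig_v1.kubeconfig_1.raw_config
      sensitive = true
    }

You can then save the output to a file and pass it to kubectl via the KUBECONFIG environment variable.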

2. Create a group of nodes of arbitrary configuration with a network disk

resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
  cluster_id                   = selectel_mks_cluster_v1.cluster_1.id
  project_id                   = selectel_mks_cluster_v1.cluster_1.project_id
  region                       = selectel_mks_cluster_v1.cluster_1.region
  availability_zone            = "ru-9a"
  nodes_count                  = "2"
  cpus                         = 2
  ram_mb                       = 4096
  volume_gb                    = 32
  volume_type                  = "fast.ru-9a"
  install_nvidia_device_plugin = false
  labels = {
    "label-key0" : "label-value0",
    "label-key1" : "label-value1",
    "label-key2" : "label-value2",
  }
}

Here:

  • availability_zone — pool segment in which the group of nodes will be located, e.g. ru-9a;

  • nodes_count — number of worker nodes in the node group. The maximum number of nodes is 15;

  • cpus — number of vCPUs for each node;

  • ram_mb — the amount of RAM for each node in MB;

  • volume_gb — disk size in GB;

  • volume_type — disk type in the format <type>.<pool_segment>, for example basic.ru-9a:

    • <type> — basic, universal or fast;
    • <pool_segment> — pool segment in which the network disk will be created, e.g. ru-9a;
  • install_nvidia_device_plugin — confirms or cancels the installation of GPU drivers and the NVIDIA® Device Plugin:

    • true — for GPU flavors, confirms the installation of GPU drivers and the NVIDIA® Device Plugin;
    • false — for both GPU and non-GPU flavors, cancels the installation of GPU drivers and the NVIDIA® Device Plugin. You can install drivers for node groups with GPUs yourself.

View a detailed description of the selectel_mks_nodegroup_v1 resource.
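A fixed nodes_count is not the only option. As a sketch (the autoscaling argument names follow the selectel_mks_nodegroup_v1 documentation; verify them against your provider version), a node group can scale automatically between a minimum and a maximum number of nodes:

    resource "selectel_mks_nodegroup_v1" "nodegroup_autoscaled" {
      cluster_id        = selectel_mks_cluster_v1.cluster_1.id
      project_id        = selectel_mks_cluster_v1.cluster_1.project_id
      region            = selectel_mks_cluster_v1.cluster_1.region
      availability_zone = "ru-9a"
      cpus              = 2
      ram_mb            = 4096
      volume_gb         = 32
      volume_type       = "fast.ru-9a"

      enable_autoscale    = true
      autoscale_min_nodes = 2
      autoscale_max_nodes = 5
    }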

7. Create a cloud database cluster

  1. Create a MySQL semi-sync cluster.

  2. Create a user.

  3. Create a database.

  4. Grant the user access to the database.

1. Create a MySQL semi-sync cluster of arbitrary configuration

data "selectel_dbaas_datastore_type_v1" "datastore_type_1" {
  project_id = selectel_vpc_project_v2.project_1.id
  region     = "ru-9"
  filter {
    engine  = "mysql_native"
    version = "8"
  }
}

resource "selectel_dbaas_mysql_datastore_v1" "datastore_1" {
  name       = "datastore-1"
  project_id = selectel_vpc_project_v2.project_1.id
  region     = "ru-9"
  type_id    = data.selectel_dbaas_datastore_type_v1.datastore_type_1.datastore_types[0].id
  subnet_id  = openstack_networking_subnet_v2.subnet_1.id
  node_count = 3
  flavor {
    vcpus = 1
    ram   = 4096
    disk  = 32
  }
}

Here:

  • region — pool, e.g. ru-9. The list of available pools can be found in the Availability matrices instructions;
  • filter — cloud database type filter:
    • engine — cloud database type. For a MySQL semi-sync cluster, specify mysql_native;
    • version — version of the cloud database. For a list of available versions, see the instructions Versions and configurations;
  • node_count — number of nodes. The maximum number of nodes is 6;
  • flavor — arbitrary cluster configuration. The available values of the arbitrary configurations can be seen in the instructions Versions and configurations:
    • vcpus — number of vCPUs;
    • ram — the amount of RAM in MB;
    • disk — disk size in GB.

View a detailed description of the selectel_dbaas_datastore_type_v1 data source and the selectel_dbaas_mysql_datastore_v1 resource.

2. Create a user

resource "selectel_dbaas_user_v1" "user_1" {
  project_id   = selectel_vpc_project_v2.project_1.id
  region       = "ru-9"
  datastore_id = selectel_dbaas_mysql_datastore_v1.datastore_1.id
  name         = "user"
  password     = "secret"
}

Here:

  • region — pool in which the cluster is located;
  • name — user name;
  • password — user password.

View the detailed resource description of selectel_dbaas_user_v1.

3. Create a database

resource "selectel_dbaas_mysql_database_v1" "database_1" {
  project_id   = selectel_vpc_project_v2.project_1.id
  region       = "ru-9"
  datastore_id = selectel_dbaas_mysql_datastore_v1.datastore_1.id
  name         = "database_1"
}

Here region is the pool in which the cluster resides.

View the detailed resource description of selectel_dbaas_mysql_database_v1.

4. Grant the user access to the database

resource "selectel_dbaas_grant_v1" "grant_1" {
  project_id   = selectel_vpc_project_v2.project_1.id
  region       = "ru-9"
  datastore_id = selectel_dbaas_mysql_datastore_v1.datastore_1.id
  database_id  = selectel_dbaas_mysql_database_v1.database_1.id
  user_id      = selectel_dbaas_user_v1.user_1.id
}

View the detailed resource description of selectel_dbaas_grant_v1.