Example of building an infrastructure with a cloud load balancer

We recommend creating the resources in order. If you create all of the resources described in the configuration file at once, Terraform creates them in an order of its own, regardless of the order in which they are listed in the file.


  1. Optional: configure the providers.
  2. Create a private network and subnet.
  3. Create a cloud router connected to an external network.
  4. Create a cloud server.
  5. Create a cloud load balancer.
  6. Create a public IP address and connect it to the load balancer.
  7. Get the IP address of the load balancer.

Configuration files

Example file for configuring providers
terraform {
  required_providers {
    selectel = {
      source  = "selectel/selectel"
      version = "6.0.0"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "2.1.0"
    }
  }
}

provider "selectel" {
  domain_name = "123456"
  username    = "user"
  password    = "password"
}

resource "selectel_vpc_project_v2" "project_1" {
  name = "project"
}

resource "selectel_iam_serviceuser_v1" "serviceuser_1" {
  name     = "username"
  password = "password"
  role {
    role_name  = "member"
    scope      = "project"
    project_id = selectel_vpc_project_v2.project_1.id
  }
}

provider "openstack" {
  auth_url    = "https://cloud.api.selcloud.ru/identity/v3"
  domain_name = "123456"
  tenant_id   = selectel_vpc_project_v2.project_1.id
  user_name   = selectel_iam_serviceuser_v1.serviceuser_1.name
  password    = selectel_iam_serviceuser_v1.serviceuser_1.password
  region      = "ru-9"
}
Example file for building an infrastructure with a cloud load balancer
resource "openstack_networking_network_v2" "network_1" {
name = "private-network"
admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet_1" {
name = "private-subnet"
network_id = openstack_networking_network_v2.network_1.id
cidr = "192.168.199.0/24"
}

data "openstack_networking_network_v2" "external_network_1" {
external = true
}

resource "openstack_networking_router_v2" "router_1" {
name = "router"
external_network_id = data.openstack_networking_network_v2.external_network_1.id
}

resource "openstack_networking_router_interface_v2" "router_interface_1" {
router_id = openstack_networking_router_v2.router_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
}

resource "selectel_vpc_keypair_v2" "keypair_1" {
name = "keypair"
public_key = file("~/.ssh/id_rsa.pub")
user_id = selectel_iam_serviceuser_v1.serviceuser_1.id
}

resource "openstack_networking_port_v2" "port_1" {
name = "port"
network_id = openstack_networking_network_v2.network_1.id

fixed_ip {
subnet_id = openstack_networking_subnet_v2.subnet_1.id
}
}

data "openstack_images_image_v2" "image_1" {
name = "Ubuntu 20.04 LTS 64-bit"
most_recent = true
visibility = "public"
}

resource "openstack_blockstorage_volume_v3" "volume_1" {
name = "boot-volume-for-server"
size = "5"
image_id = data.openstack_images_image_v2.image_1.id
volume_type = "fast.ru-9a"
availability_zone = "ru-9a"
enable_online_resize = true

lifecycle {
ignore_changes = [image_id]
}

}

resource "openstack_compute_instance_v2" "server_1" {
name = "server"
flavor_id = "4011"
key_pair = selectel_vpc_keypair_v2.keypair_1.name
availability_zone = "ru-9a"

network {
port = openstack_networking_port_v2.port_1.id
}

lifecycle {
ignore_changes = [image_id]
}

block_device {
uuid = openstack_blockstorage_volume_v3.volume_1.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}

vendor_options {
ignore_resize_confirmation = true
}
}

resource "openstack_lb_loadbalancer_v2" "load_balancer_1" {
name = "load-balancer"
vip_subnet_id = openstack_networking_subnet_v2.subnet_1.id
flavor_id = "ac18763b-1fc5-457d-9fa7-b0d339ffb336"
}

resource "openstack_lb_listener_v2" "listener_1" {
name = "listener"
protocol = "TCP"
protocol_port = "80"
loadbalancer_id = openstack_lb_loadbalancer_v2.load_balancer_1.id
}

resource "openstack_lb_pool_v2" "pool_1" {
name = "pool"
protocol = "PROXY"
lb_method = "ROUND_ROBIN"
listener_id = openstack_lb_listener_v2.listener_1.id
}

resource "openstack_lb_member_v2" "member_1" {
name = "member"
subnet_id = openstack_networking_subnet_v2.subnet_1.id
pool_id = openstack_lb_pool_v2.pool_1.id
address = "192.168.199.4"
protocol_port = "80"
}

resource "openstack_lb_monitor_v2" "monitor_1" {
name = "monitor"
pool_id = openstack_lb_pool_v2.pool_1.id
type = "HTTP"
delay = "10"
timeout = "4"
max_retries = "5"
}

resource "openstack_networking_floatingip_v2" "floatingip_1" {
pool = "external-network"
port_id = openstack_lb_loadbalancer_v2.load_balancer_1.vip_port_id
}

output "public_ip_address" {
value = openstack_networking_floatingip_v2.floatingip_1.fixed_ip
}

Optional: configure the providers

If you have already configured the Selectel and OpenStack providers, skip this step.

  1. Make sure that you have created a service user with the Account Administrator and User Administrator roles in the control panel.

  2. Create a directory to store the configuration files and a separate file with the .tf extension for configuring the providers.

  3. Add the Selectel and OpenStack providers to the provider configuration file:

    terraform {
      required_providers {
        selectel = {
          source  = "selectel/selectel"
          version = "6.0.0"
        }
        openstack = {
          source  = "terraform-provider-openstack/openstack"
          version = "2.1.0"
        }
      }
    }

    Here version is the provider version. You can find the current versions in the Selectel provider documentation (in the Terraform Registry and on GitHub) and in the OpenStack provider documentation (in the Terraform Registry and on GitHub).

    For more information about the products and services you can manage with the providers, see the Selectel and OpenStack providers guide.

  4. Initialize the Selectel provider:

    provider "selectel" {
    domain_name = "123456"
    username = "user"
    password = "password"
    }

    Here:

    • domain_name: the Selectel account number. You can find it in the top right corner of the control panel;
    • username: the name of the service user with the Account Administrator and User Administrator roles. You can find it in the control panel under Access Management → User Management → the Service Users tab (the section is available only to the Account Owner and the User Administrator);
    • password: the service user's password. You can view it when creating the user or set a new one. A minimal sketch of passing the password through a Terraform variable instead of hardcoding it is shown after this list.
  5. Create a project:

    resource "selectel_vpc_project_v2" "project_1" {
    name = "project"
    }

    See the detailed description of the selectel_vpc_project_v2 resource.

  6. Create a service user for access to the project and assign it the Project Administrator role:

    resource "selectel_iam_serviceuser_v1" "serviceuser_1" {
    name = "username"
    password = "password"
    role {
    role_name = "member"
    scope = "project"
    project_id = selectel_vpc_project_v2.project_1.id
    }
    }

    Here:

    • username: the user name;
    • password: the user's password. The password must be at least eight characters long and contain digits and uppercase and lowercase Latin letters;
    • project_id: the project ID. You can find it in the control panel: in the Cloud Platform section, open the project menu (the name of the current project) and click the icon in the row of the required project.

    See the detailed description of the selectel_iam_serviceuser_v1 resource.

  7. Initialize the OpenStack provider:

    provider "openstack" {
    auth_url = "https://cloud.api.selcloud.ru/identity/v3"
    domain_name = "123456"
    tenant_id = selectel_vpc_project_v2.project_1.id
    user_name = selectel_iam_serviceuser_v1.serviceuser_1.name
    password = selectel_iam_serviceuser_v1.serviceuser_1.password
    region = "ru-9"
    }

    Here:

    • domain_name: the Selectel account number. You can find it in the top right corner of the control panel;
    • region: the pool, for example ru-9. All resources will be created in this pool. The list of available pools is in the Availability matrices guide.
  8. If you create resources at the same time as you configure the providers, add the depends_on argument to the OpenStack resources. For example, for the openstack_networking_network_v2 resource:

    resource "openstack_networking_network_v2" "network_1" {
    name = "private-network"
    admin_state_up = "true"

    depends_on = [
    selectel_vpc_project_v2.project_1,
    selectel_iam_serviceuser_v1.serviceuser_1
    ]
    }
  9. Optional: if you want to use a mirror, create a separate Terraform CLI configuration file and add the following block to it:

    provider_installation {
      network_mirror {
        url     = "https://tf-proxy.selectel.ru/mirror/v1/"
        include = ["registry.terraform.io/*/*"]
      }
      direct {
        exclude = ["registry.terraform.io/*/*"]
      }
    }

    For more information about mirror settings, see the CLI Configuration File guide in the HashiCorp documentation.

  10. Open the CLI.

  11. Initialize the Terraform configuration in the directory:

    terraform init
  12. Check that the configuration files contain no errors:

    terraform validate
  13. Format the configuration files:

    terraform fmt
  14. Preview the resources that will be created:

    terraform plan
  15. Apply the changes and create the resources:

    terraform apply
  16. Confirm the creation: enter yes and press Enter. The created resources will appear in the control panel.

  17. If there are not enough quotas to create the resources, increase the quotas.
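
As mentioned in step 4, instead of hardcoding credentials in the provider block you can pass them in through Terraform variables. A minimal sketch, assuming the value is supplied via terraform.tfvars or a TF_VAR_selectel_password environment variable (the variable name is illustrative):

variable "selectel_password" {
  type      = string
  sensitive = true
}

provider "selectel" {
  domain_name = "123456"
  username    = "user"
  password    = var.selectel_password
}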

Create a private network and subnet

resource "openstack_networking_network_v2" "network_1" {
name = "private-network"
admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet_1" {
name = "private-subnet"
network_id = openstack_networking_network_v2.network_1.id
cidr = "192.168.199.0/24"
}

Here cidr is the CIDR of the private subnet, for example 192.168.199.0/24.

See the detailed descriptions of the openstack_networking_network_v2 and openstack_networking_subnet_v2 resources.
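
The openstack_networking_subnet_v2 resource also accepts optional arguments such as dns_nameservers and enable_dhcp if you need the subnet to hand out specific DNS servers. A minimal sketch, shown as a second illustrative subnet on the same network; the resolver addresses are placeholders, substitute the ones you actually use:

resource "openstack_networking_subnet_v2" "subnet_with_dns" {
  name            = "private-subnet-dns"
  network_id      = openstack_networking_network_v2.network_1.id
  cidr            = "192.168.200.0/24"
  enable_dhcp     = true
  dns_nameservers = ["1.1.1.1", "8.8.8.8"] # placeholder resolvers
}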

Create a cloud router connected to an external network

A cloud router connected to an external network acts as a 1:1 NAT for access from a private network to the Internet through the public IP address of the router.

data "openstack_networking_network_v2" "external_network_1" {
external = true
}

resource "openstack_networking_router_v2" "router_1" {
name = "router"
external_network_id = data.openstack_networking_network_v2.external_network_1.id
}

resource "openstack_networking_router_interface_v2" "router_interface_1" {
router_id = openstack_networking_router_v2.router_1.id
subnet_id = openstack_networking_subnet_v2.subnet_1.id
}

See the detailed descriptions of the openstack_networking_network_v2 data source and the openstack_networking_router_v2 and openstack_networking_router_interface_v2 resources.

Create a cloud server

  1. Create an SSH key pair.
  2. Create a port for the cloud server.
  3. Get an image.
  4. Create a bootable network disk.
  5. Create a cloud server.

Create an SSH key pair

resource "selectel_vpc_keypair_v2" "keypair_1" {
name = "keypair"
public_key = file("~/.ssh/id_rsa.pub")
user_id = selectel_iam_serviceuser_v1.serviceuser_1.id
}

Here public_key is the path to the public SSH key. If you have not created SSH keys yet, generate them.

Check out the detailed description of the resource selectel_vpc_keypair_v2.
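
If you prefer to have Terraform generate the key pair itself, one option is the hashicorp/tls provider: create a tls_private_key resource and pass its public key to selectel_vpc_keypair_v2. A minimal sketch, assuming the tls provider is added to required_providers (the resource names are illustrative); note that the generated private key is stored in the Terraform state, so protect the state file accordingly:

resource "tls_private_key" "ssh_key_1" {
  algorithm = "ED25519"
}

resource "selectel_vpc_keypair_v2" "keypair_generated" {
  name       = "generated-keypair"
  public_key = tls_private_key.ssh_key_1.public_key_openssh
  user_id    = selectel_iam_serviceuser_v1.serviceuser_1.id
}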

Create a port for the cloud server

resource "openstack_networking_port_v2" "port_1" {
name = "port"
network_id = openstack_networking_network_v2.network_1.id

fixed_ip {
subnet_id = openstack_networking_subnet_v2.subnet_1.id
}
}

Check out the detailed description of the resource openstack_networking_port_v2.

Get an image

data "openstack_images_image_v2" "image_1" {
name = "Ubuntu 20.04 LTS 64-bit"
most_recent = true
visibility = "public"
}

Check out the detailed description of the data source openstack_images_image_v2.
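
Because most_recent = true selects the newest image that matches the name, it can be useful to see which image was actually picked. A minimal sketch of an output for that (the output name is illustrative):

output "selected_image_id" {
  value = data.openstack_images_image_v2.image_1.id
}

After terraform apply, run terraform output selected_image_id to print the resolved ID.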

Create a bootable network disk

resource "openstack_blockstorage_volume_v3" "volume_1" {
name = "boot-volume-for-server"
size = "5"
image_id = data.openstack_images_image_v2.image_1.id
volume_type = "fast.ru-9a"
availability_zone = "ru-9a"
enable_online_resize = true

lifecycle {
ignore_changes = [image_id]
}

}

Here:

  • size: the disk size in GB;
  • volume_type: the disk type and pool segment in the format <type>.<pool segment>, for example fast.ru-9a;
  • availability_zone: the pool segment where the disk will be created, for example ru-9a.

Check out the detailed description of the resource openstack_blockstorage_volume_v3.

Create a cloud server

resource "openstack_compute_instance_v2" "server_1" {
name = "server"
flavor_id = "4011"
key_pair = selectel_vpc_keypair_v2.keypair_1.name
availability_zone = "ru-9a"

network {
port = openstack_networking_port_v2.port_1.id
}

lifecycle {
ignore_changes = [image_id]
}

block_device {
uuid = openstack_blockstorage_volume_v3.volume_1.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}

vendor_options {
ignore_resize_confirmation = true
}
}

Here:

  • availability_zone: the pool segment where the cloud server will be created, for example ru-9a. The list of available pool segments is in the Availability matrices guide;
  • flavor_id: the flavor ID. Flavors correspond to cloud server configurations and determine the number of vCPUs, the amount of RAM, and (optionally) the local disk size of the server. You can use fixed-configuration flavors. For example, 4011 is the ID for creating a Memory Line fixed-configuration server with 2 vCPUs and 16 GB of RAM in the ru-9 pool. The list of flavors is in the table List of fixed-configuration flavors in all pools.

Check out the detailed description of the resource openstack_compute_instance_v2.

Create a cloud load balancer

  1. Create a load balancer.
  2. Create a rule.
  3. Create a target group.
  4. Add the server to the target group.
  5. Create an availability check.

Create a load balancer

resource "openstack_lb_loadbalancer_v2" "load_balancer_1" {
name = "load-balancer"
vip_subnet_id = openstack_networking_subnet_v2.subnet_1.id
flavor_id = "ac18763b-1fc5-457d-9fa7-b0d339ffb336"
}

Here flavor_id is the flavor ID. Flavors correspond to load balancer types and determine the number of vCPUs, the amount of RAM, and the number of balancer instances. For example, ac18763b-1fc5-457d-9fa7-b0d339ffb336 is the ID for creating an Advanced load balancer with reservation in the ru-9 pool. The list of flavors is in the table List of load balancer flavors in all pools.

Check out the detailed description of the resource openstack_lb_loadbalancer_v2.

Create a rule

resource "openstack_lb_listener_v2" "listener_1" {
name = "listener"
protocol = "TCP"
protocol_port = "80"
loadbalancer_id = openstack_lb_loadbalancer_v2.load_balancer_1.id
}

Here:

  • protocol: the traffic protocol that the rule (listener) accepts, for example TCP;
  • protocol_port: the port on which the load balancer accepts traffic.

Check out the detailed description of the resource openstack_lb_listener_v2.

Create a target group

resource "openstack_lb_pool_v2" "pool_1" {
name = "pool"
protocol = "PROXY"
lb_method = "ROUND_ROBIN"
listener_id = openstack_lb_listener_v2.listener_1.id
}

Here:

  • protocol: the protocol the load balancer uses to send traffic to the servers in the target group, for example PROXY;
  • lb_method: the balancing algorithm, for example ROUND_ROBIN.

Check out the detailed description of the resource openstack_lb_pool_v2.
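
If clients need to keep reaching the same backend server, openstack_lb_pool_v2 also supports a persistence block. A minimal sketch with source-IP session persistence, shown as an alternative definition of the target group (a rule has a single target group, so use either this or pool_1, not both):

resource "openstack_lb_pool_v2" "pool_sticky" {
  name        = "pool-sticky"
  protocol    = "PROXY"
  lb_method   = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.listener_1.id

  persistence {
    type = "SOURCE_IP"
  }
}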

Add the server to the target group

resource "openstack_lb_member_v2" "member_1" {
name = "member"
subnet_id = openstack_networking_subnet_v2.subnet_1.id
pool_id = openstack_lb_pool_v2.pool_1.id
address = "192.168.199.4"
protocol_port = "80"
}

Here:

  • address: the private IP address of the server, for example 192.168.199.4;
  • protocol_port: the port on which the server receives traffic from the load balancer.

Check out the detailed description of the resource openstack_lb_member_v2.
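
The address above is hardcoded. Because the server's port is managed in the same configuration, you can reference its IP instead, so the member always follows the port. A minimal sketch, shown as an alternative to member_1:

resource "openstack_lb_member_v2" "member_1_ref" {
  name          = "member"
  subnet_id     = openstack_networking_subnet_v2.subnet_1.id
  pool_id       = openstack_lb_pool_v2.pool_1.id
  address       = openstack_networking_port_v2.port_1.all_fixed_ips[0] # first fixed IP of the server's port
  protocol_port = "80"
}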

Create an availability check

resource "openstack_lb_monitor_v2" "monitor_1" {
name = "monitor"
pool_id = openstack_lb_pool_v2.pool_1.id
type = "HTTP"
delay = "10"
timeout = "4"
max_retries = "5"
}

Here:

  • type: the check type, for example HTTP;
  • delay: the interval in seconds between check requests sent to the servers;
  • timeout: the connection timeout in seconds (how long to wait for a response);
  • max_retries: the number of consecutive successful checks after which the server is considered operational (success threshold).

Check out the detailed description of the resource openstack_lb_monitor_v2.
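
For an HTTP check you can also control which URL is requested and which response codes count as healthy. A minimal sketch, shown as an alternative to monitor_1 (the /healthz path is an assumption about your application):

resource "openstack_lb_monitor_v2" "monitor_http" {
  name           = "monitor-http"
  pool_id        = openstack_lb_pool_v2.pool_1.id
  type           = "HTTP"
  http_method    = "GET"
  url_path       = "/healthz"
  expected_codes = "200"
  delay          = "10"
  timeout        = "4"
  max_retries    = "5"
}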

Create a public IP address and connect it to the load balancer

The public IP address will be connected to the load balancer port and associated with the private IP.

resource "openstack_networking_floatingip_v2" "floatingip_1" {
pool = "external-network"
port_id = openstack_lb_loadbalancer_v2.load_balancer_1.vip_port_id
}

Check out the detailed description of the resource openstack_networking_floatingip_v2.

Get the IP address of the load balancer

output "public_ip_address" {
value = openstack_networking_floatingip_v2.floatingip_1.fixed_ip
}
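
After terraform apply finishes, Terraform prints this output; you can also read it again later from the same directory:

terraform output public_ip_address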