Load Balancers

The cloud load balancer distributes incoming network traffic between cloud servers in a single pool. A load balancer increases the availability of services: it distributes requests evenly between servers and reduces the load on each of them. If one server fails, the load balancer redirects traffic to another suitable server.

The load balancer operates at layers L3-L4 (network load balancer) and L7 (application load balancer). To balance HTTPS traffic, TLS (SSL) certificates from the secrets manager are used; for more information, see the TLS (SSL) load balancer certificates instructions.

You can work with the load balancer in the [control panel](https://my.selectel.ru/vpc/lbaas/), via the OpenStack CLI, or with Terraform.

Load balancer types

| | Basic without redundancy | Basic with redundancy | Advanced with redundancy |
|---|---|---|---|
| Number of instances | One | Two | Two |
| Instance configuration | 2 vCPUs, 1 GB RAM | 2 vCPUs, 1 GB RAM | 4 vCPUs, 2 GB RAM |
| Fault tolerance and redundancy | Single mode only | Active-Standby failover to a backup instance in the same pool | Active-Standby failover to a backup instance in the same pool |
| Suitable for | Test environments or projects that do not require 24/7 service availability | Small and medium-sized projects for which service availability is critical | Projects with a high workload and a requirement for constant service availability |
| Bandwidth | Up to 3 Gbit/s; can be increased to 5 Gbit/s — [create a ticket](https://my.selectel.ru/tickets/create/) | Up to 3 Gbit/s; can be increased to 5 Gbit/s — [create a ticket](https://my.selectel.ru/tickets/create/) | Up to 3 Gbit/s; can be increased to 5 Gbit/s — [create a ticket](https://my.selectel.ru/tickets/create/) |
| HTTP requests per second (RPS) | ~19,500 | ~19,500 | ~34,000 |
| HTTPS requests per second with termination on the load balancer (RPS) | ~3,000 keep-alive connections (with 10,000 simultaneous TCP connections) | ~3,000 keep-alive connections (with 10,000 simultaneous TCP connections) | ~9,000 keep-alive connections (with 10,000 simultaneous TCP connections) |

If the types are not suitable, you can order a custom type of load balancer — [create a ticket](https://my.selectel.ru/tickets/create/).

List of flavors of the load balancer

Flavors correspond to load balancer types and determine the number of vCPUs, the amount of RAM, and the number of load balancer instances.

To create load balancers via the OpenStack CLI and Terraform, flavor IDs or names are used. The IDs differ between pools.

note

For example, `ac18763b-1fc5-457d-9fa7-b0d339ffb336` is the ID and `AMPH1.ACT_STNDB.4-2048` is the name of the flavor that corresponds to the Advanced with redundancy type in the ru-9 pool.

You can view the list of load balancer flavors in all pools in the table below, or view the list of flavors in a specific pool via the OpenStack CLI.

List of flavors of the load balancer in all pools

| ID | Name |
|---|---|
| d4490352-a58a-44b7-b226-717cd7607c0e | AMPH1.SNGL.2-1024 |
| dbf2523f-39a5-4f34-be74-07eb3f111171 | AMPH1.ACT_STNDB.2-1024 |
| ea49b7dd-c126-4b22-8a2c-2eb65cbda662 | AMPH1.ACT_STNDB.4-2048 |

Here:

  • ID — the flavor ID of the load balancer;
  • Name — the flavor name that corresponds to the load balancer type:
    • AMPH1.SNGL.2-1024 — Basic type without redundancy;
    • AMPH1.ACT_STNDB.2-1024 — Basic type with redundancy;
    • AMPH1.ACT_STNDB.4-2048 — Advanced type with redundancy.
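
As an illustration, a flavor name or ID can be passed when creating a load balancer via the OpenStack CLI. The sketch below is a minimal example only: the load balancer name, the subnet, and the chosen flavor value are assumptions, so substitute your own.

    # create a load balancer with an explicit flavor;
    # "my-balancer" and "private-subnet" are hypothetical names
    openstack loadbalancer create \
      --name my-balancer \
      --vip-subnet-id private-subnet \
      --flavor AMPH1.ACT_STNDB.2-1024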

View the list of flavors of the load balancer in a specific pool

  1. Open the OpenStack CLI.

  2. View the list of flavors:

    openstack loadbalancer flavor list -c id -c name

    Sample response for the ru-9 pool:

    +--------------------------------------+------------------------+
    | id                                   | name                   |
    +--------------------------------------+------------------------+
    | 3265f75f-01eb-456d-9088-44b813d29a60 | AMPH1.SNGL.2-1024      |
    | d3b8898c-af94-47f8-9996-65b9c6aa95e2 | AMPH1.ACT_STNDB.2-1024 |
    | ac18763b-1fc5-457d-9fa7-b0d339ffb336 | AMPH1.ACT_STNDB.4-2048 |
    +--------------------------------------+------------------------+

    Here:

    • id — the flavor ID of the load balancer;
    • name — the flavor name that corresponds to the load balancer type:
      • AMPH1.SNGL.2-1024 — Basic type without redundancy;
      • AMPH1.ACT_STNDB.2-1024 — Basic type with redundancy;
      • AMPH1.ACT_STNDB.4-2048 — Advanced type with redundancy.

How the load balancer works

Load balancer operation diagram

The load balancer uses the [OpenStack Octavia](https://docs.openstack.org/octavia/queens/index.html) model, which includes:

  • instance (amphora) — performs load balancing. It runs on a cloud server and uses HAProxy (High-Availability Proxy), software for proxying traffic. Load balancers with redundancy (the Basic with redundancy and Advanced with redundancy types) create two instances; load balancers without redundancy create one;
  • target group (pool) — a group of servers to which a rule redirects requests using the protocol specified for the group;
  • servers (members) — the servers that serve traffic in the pool. Each is reachable at the IP address and port specified for it within the target group;
  • availability checks (health monitor) — the process of checking the health of all servers in the target group;
  • rule (listener) — listens for the traffic coming to the load balancer on the [protocols](#protocols) and ports specified in the rule, then routes the traffic to the required group of servers;
  • HTTP policy (L7 policy) — additional conditions in a rule for routing HTTP traffic with certain parameters.
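
For orientation, these objects map onto OpenStack CLI commands. The sketch below is a minimal, hypothetical example: the names, address, and ports are placeholders and continue the example load balancer "my-balancer" above.

    # rule (listener) that accepts HTTP traffic on port 80
    openstack loadbalancer listener create \
      --name web-listener --protocol HTTP --protocol-port 80 my-balancer

    # target group (pool) attached to the rule
    openstack loadbalancer pool create \
      --name web-pool --protocol HTTP --lb-algorithm ROUND_ROBIN \
      --listener web-listener

    # server (member) added to the target group; address and port are placeholders
    openstack loadbalancer member create \
      --address 192.168.0.10 --protocol-port 8080 web-pool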

Target groups

The target group is a group of servers to which traffic from the load balancer is distributed. A server can be part of several target groups of the same load balancer if different ports are specified for the server in these groups.

For the target group, you can configure:

Availability checks

You can enable availability checks for the target group. The load balancer monitors the status of the servers: if a server becomes unavailable, the load balancer redirects connections to another one.

Check parameters:

  • check type. Depending on the protocol of the target group, the following types are available:

    • TCP group — TCP, PING;
    • PROXY group — TLS-HELLO, HTTP, TCP, PING;
    • UDP group — UDP-CONNECT, PING;
    • HTTP group — HTTP, TCP, PING;
  • for the HTTP check type, you can configure the URL to check and the expected response codes;

  • interval between checks — the interval in seconds at which the load balancer sends check requests to the servers;

  • connection timeout — the time to wait for a response;

  • success threshold — the number of consecutive successful requests after which the server is put back into operation;

  • failure threshold — the number of consecutive unsuccessful requests after which the server is taken out of operation.
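
As a sketch, an availability check with these parameters can be created via the OpenStack CLI; the pool name, URL, and numeric values below are assumptions.

    # HTTP availability check for the hypothetical pool "web-pool":
    # check every 5 s, wait up to 3 s for a response,
    # 3 successes bring a server back, 3 failures take it out
    openstack loadbalancer healthmonitor create \
      --type HTTP --url-path /health --expected-codes 200 \
      --delay 5 --timeout 3 --max-retries 3 --max-retries-down 3 \
      web-pool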

Rules

The rule is the load balancer settings that serve the traffic flow with a specific port and protocol and distribute this traffic to the desired group of servers.

In the rule, you can configure:

  • the protocols and ports of the load balancer's incoming traffic;
  • [HTTP policies](#http-policies) for additional routing of HTTP traffic;
  • settings for connections passing through the load balancer;
  • the target group of servers that receives the traffic.

The number of rules in the load balancer is unlimited.
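
For example, a second rule can be added to the same load balancer to serve a different port and protocol and send that traffic to its own target group. The sketch below is hypothetical and reuses the earlier placeholder names.

    # a second rule on the same balancer: TCP traffic on port 5432
    # sent to a separate hypothetical pool "db-pool"
    openstack loadbalancer pool create \
      --name db-pool --protocol TCP --lb-algorithm ROUND_ROBIN \
      --loadbalancer my-balancer
    openstack loadbalancer listener create \
      --name db-listener --protocol TCP --protocol-port 5432 \
      --default-pool db-pool my-balancer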

HTTP policies

HTTP policy is an addition to a rule that allows certain HTTP and HTTPS traffic to be routed separately from the rest of the traffic:

  • redirect it to another target group (REDIRECT_TO_POOL);
  • redirect it to a URL — completely replace the request URL, including the protocol, domain name, path, and request parameters (REDIRECT_TO_URL);
  • redirect it to a URL prefix — replace the protocol and domain name in the request URL (REDIRECT_PREFIX);
  • reject the request (REJECT).

The request is redirected according to the first appropriate policy. The order in which policies are applied depends on the policy action: REJECT policies are applied first, then REDIRECT_TO_URL and REDIRECT_PREFIX, then REDIRECT_TO_POOL. If there are several policies in a rule with the same action, they are applied according to the policy position in the rule. You can change the order in which policies are applied.

An HTTP policy consists of a set of conditions; the number of conditions in a policy is unlimited. For a request to fall under the policy, it must meet all of the policy's conditions. A condition specifies:

  • the request parameter to check: HOST_NAME or PATH. When configuring the policy through the OpenStack CLI, you can also create a condition for the COOKIE, FILE_TYPE, and HEADER parameters;
  • the reference value to check against — an exact value or a regular expression;
  • the type of match with the reference value: EQUAL TO, STARTS WITH, ENDS WITH, CONTAINS, REGEX.

The number of HTTP policies in the rule is unlimited.
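
A minimal OpenStack CLI sketch of such a policy is shown below; the rule, pool, and path are hypothetical. It sends requests whose path starts with /api to a separate target group.

    # HTTP policy on the hypothetical rule "web-listener":
    # route matching requests to a separate pool "api-pool"
    openstack loadbalancer l7policy create \
      --name api-policy --action REDIRECT_TO_POOL \
      --redirect-pool api-pool web-listener

    # condition: the request PATH starts with /api
    openstack loadbalancer l7rule create \
      --type PATH --compare-type STARTS_WITH --value /api api-policy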

Load balancer ports

Load balancer instances use multiple ports:

  • incoming port (uplink) — a virtual port that hosts the virtual IP address (VIP). The rule listens for incoming traffic on it. The port is allocated when the load balancer is created and is located in its subnet. For redundant load balancers (the Basic with redundancy and Advanced with redundancy types) the VIP is reserved via the VRRP protocol;
  • service VRRP ports. When a basic load balancer is created, one service port is allocated in its subnet. When a redundant load balancer is created, two service ports are allocated for the primary and backup instances, and VRRP is configured between them;
  • service ports (downlinks). If the servers are not located in the subnet of the load balancer, ports for the instances are allocated in the subnets with the servers when the load balancer is created: one port for a basic load balancer, two ports (primary and backup) for redundant load balancers.

If there are problems with the load balancer, it automatically creates a new instance and only then deletes the old one; this requires a free port. If there is no free port, the load balancer switches to the ERROR status.

If, when creating a load balancer, you selected a public subnet as the load balancer subnet and plan to host servers in it, make sure that the subnet has a spare IP address, or use a public network with a size of /28.

We recommend choosing a private network with a public IP address (the address is needed for Internet access): in this case, a free IP address will always be available for instance re-creation, and traffic balancing will be performed inside the private network.

Protocols

Protocol combinations are available:

  • TCP–TCP — classic L4 balancing;
  • TCP–PROXY — client information is not lost and is transmitted in a separate connection header;
  • UDP–UDP — the UDP protocol is faster than TCP, but less reliable;
  • HTTP–HTTP — L7 balancing;
  • HTTPS–HTTP — L7 balancing with encryption and termination of the SSL certificate on the load balancer.
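
As a sketch of the HTTPS–HTTP combination in the OpenStack CLI, a TERMINATED_HTTPS rule references a certificate stored in the secrets manager while the target group keeps working over HTTP; the certificate reference and names below are placeholders.

    # HTTPS rule with certificate termination on the balancer;
    # the secret container reference is a placeholder
    openstack loadbalancer listener create \
      --name https-listener --protocol TERMINATED_HTTPS --protocol-port 443 \
      --default-tls-container-ref https://<key-manager-url>/v1/containers/<container-id> \
      --default-pool web-pool my-balancer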

Request distribution algorithms

The rule distributes requests according to the selected algorithm. Two algorithms are available:

  • Round Robin — a round-robin algorithm. The first request is sent to one server, the next to another, and so on until the last server is reached; then the cycle starts over. Requests are distributed to the servers according to the specified weights.
  • Least Connections — an algorithm that takes into account the number of connections to the servers. A new request is sent to the server with the fewest active connections; server weight is not taken into account.
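
In the OpenStack CLI, the algorithm is set on the target group, and a weight can be assigned to each server for Round Robin. The sketch below is illustrative; the names and values are assumptions.

    # pool using Round Robin; LEAST_CONNECTIONS can be used instead
    openstack loadbalancer pool create \
      --name weighted-pool --protocol HTTP --lb-algorithm ROUND_ROBIN \
      --listener web-listener

    # this member receives roughly twice as many requests as a member with weight 1
    openstack loadbalancer member create \
      --address 192.168.0.11 --protocol-port 8080 --weight 2 weighted-pool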

Sticky Sessions

Additionally, sticky sessions can be enabled. The method is needed when the end application keeps a long-lived connection with each client and stores internal state that is not synchronized between the servers in the rule.

New requests are distributed according to the selected algorithm, and the session is then assigned to the server that started processing its requests. All subsequent requests from this session are sent to that server, without taking the selected algorithm into account. If the server is unavailable, the request is redirected to another one.

Session identification parameters can be configured to balance sessions or to bind one client to one server. A session can be identified:

  • by APP cookie — an existing cookie that is set in the application code;
  • by HTTP cookie — a cookie that is created and attached to the session by the load balancer;
  • by Source IP — the client's IP address is hashed and divided by the weight of each server in the target group; this determines the server that will process the requests.
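
A minimal sketch of enabling sticky sessions on a target group via the OpenStack CLI, assuming an HTTP cookie created by the load balancer; the pool settings below are hypothetical.

    # sticky sessions via a cookie attached by the load balancer;
    # for Source IP use type=SOURCE_IP, for an application cookie
    # use type=APP_COOKIE,cookie_name=<name>
    openstack loadbalancer pool create \
      --name sticky-pool --protocol HTTP --lb-algorithm ROUND_ROBIN \
      --listener web-listener \
      --session-persistence type=HTTP_COOKIE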

Connection settings

You can configure settings for connections passing through the load balancer — between incoming requests and the load balancer, and between the load balancer and the servers.

Connection settings:

  • connection timeout — the time to wait for a response;
  • maximum connections — the maximum number of active connections;
  • inactivity timeout — the time during which the connection is considered active even if no data is transmitted;
  • TCP packet timeout — the time during which the load balancer waits for data transmission for inspection over an already established connection.
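
In the OpenStack CLI, these settings are specified on the rule. The sketch below is illustrative only: all values are assumptions, with timeouts given in milliseconds.

    # connection settings on a hypothetical rule; values are placeholders
    openstack loadbalancer listener create \
      --name tuned-listener --protocol HTTP --protocol-port 8081 \
      --connection-limit 10000 \
      --timeout-member-connect 5000 \
      --timeout-client-data 50000 \
      --timeout-member-data 50000 \
      --timeout-tcp-inspect 0 \
      my-balancer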

HTTP request headers

In normal operation, the load balancer transmits only the original HTTP request body to the server, replacing the client's IP address with its own.

Enable the additional headers you need so that the servers receive this information for correct operation or analysis:

  • X-Forwarded-For — the IP address from which the request came;
  • X-Forwarded-Port — the port of the load balancer that the request came to;
  • X-Forwarded-Proto — the original connection protocol;
  • X-SSL-Client-Verify — whether the client used a secure connection;
  • X-SSL-Client-Has-Cert — whether the client has a certificate;
  • X-SSL-Client-DN — the certificate owner's identification information;
  • X-SSL-Client-CN — the name of the host for which the certificate was issued;
  • X-SSL-Issuer — the certification authority that issued the certificate;
  • X-SSL-Client-SHA1 — the SHA1 fingerprint of the client certificate;
  • X-SSL-Client-Not-Before — the start of the certificate validity period;
  • X-SSL-Client-Not-After — the end of the certificate validity period.
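
A hedged sketch of enabling additional headers on an existing rule via the OpenStack CLI; the rule name is a placeholder.

    # enable X-Forwarded-* headers on the hypothetical rule "web-listener"
    openstack loadbalancer listener set \
      --insert-headers X-Forwarded-For=true,X-Forwarded-Port=true,X-Forwarded-Proto=true \
      web-listener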

Cost

Load balancers are paid for according to the cloud platform payment model.

The cost of load balancers can be viewed at [selectel.ru](https://selectel.ru/prices/).