Load balancers

A cloud load balancer distributes incoming network traffic between cloud servers in a single pool. A load balancer improves service availability by distributing requests optimally between servers and reducing the load on each of them. If one server fails, the load balancer redirects traffic to another suitable server.

The balancer works at L3-L4 (network load balancer) and L7 (application load balancer). To balance HTTPS traffic, use TLS (SSL) certificates from the secrets manager; for details, see the instructions TLS (SSL) certificates of the load balancer.

You can work with the load balancer in the control panel, through the OpenStack CLI, or with Terraform.

To track balancer metrics, you can set up monitoring with Prometheus. For details on the available metrics and how to set up monitoring, see the instructions Cloud load balancer monitoring.

Types of load balancer

|  | Basic without reservation | Basic with reservation | Advanced with reservation |
|---|---|---|---|
| Number of instances | One | Two | Two |
| Instance configuration | 2 vCPU, 1 GB RAM | 2 vCPU, 1 GB RAM | 4 vCPU, 2 GB RAM |
| Fault tolerance and redundancy | Single mode only | Active-Standby: failover to a standby instance in the same pool | Active-Standby: failover to a standby instance in the same pool |
| What it's good for | Test environments or projects that do not require 24/7 service availability | Small and medium-sized projects for which service availability is critical | Projects with high load and a requirement for constant service availability |
| Throughput | Up to 3 Gbps. Can be increased up to 5 Gbps — file a ticket | Up to 3 Gbps. Can be increased up to 5 Gbps — file a ticket | Up to 3 Gbps. Can be increased up to 5 Gbps — file a ticket |
| HTTP requests per second (RPS) | ~19,500 | ~19,500 | ~34,000 |
| HTTPS requests per second with termination on the balancer (RPS) | ~3,000 keep-alive connections (with 10,000 simultaneous TCP connections) | ~3,000 keep-alive connections (with 10,000 simultaneous TCP connections) | ~9,000 keep-alive connections (with 10,000 simultaneous TCP connections) |

If the types don't fit, you can order a custom balancer type — file a ticket.

List of load balancer flavors

The flavors correspond to the load balancer types and determine the number of vCPUs, the amount of RAM, and the number of balancer instances.

To create load balancers through the OpenStack CLI and Terraform, flavor IDs or names are used (see the example at the end of this section). The IDs differ between pools.

note

For example, ac18763b-1fc5-457d-9fa7-b0d339ffb336 is the ID and AMPH1.ACT_STNDB.4-2048 is the name of the flavor that corresponds to the Advanced with reservation type in the ru-9 pool.

You can view the list of load balancer flavors for all pools in the table below, or view the list of flavors for a specific pool through the OpenStack CLI.

List of load balancer flavors in all pools

| ID | Name |
|---|---|
| d4490352-a58a-44b7-b226-717cd7607c0e | AMPH1.SNGL.2-1024 |
| dbf2523f-39a5-4f34-be74-07eb3f111171 | AMPH1.ACT_STNDB.2-1024 |
| ea49b7dd-c126-4b22-8a2c-2eb65cbda662 | AMPH1.ACT_STNDB.4-2048 |

Here:

  • ID — load balancer flavor ID;
  • Name — the name of the flavor that corresponds to the load balancer type:
    • AMPH1.SNGL.2-1024 — type Basic without reservation;
    • AMPH1.ACT_STNDB.2-1024 — type Basic with reservation;
    • AMPH1.ACT_STNDB.4-2048 — type Advanced with reservation.

View the list of load balancer flavors in a specific pool

  1. Open the OpenStack CLI.

  2. View the list of flavors:

    openstack loadbalancer flavor list -c id -c name

    Example response for the ru-9 pool:

    +--------------------------------------+------------------------+
    | id                                   | name                   |
    +--------------------------------------+------------------------+
    | 3265f75f-01eb-456d-9088-44b813d29a60 | AMPH1.SNGL.2-1024      |
    | d3b8898c-af94-47f8-9996-65b9c6aa95e2 | AMPH1.ACT_STNDB.2-1024 |
    | ac18763b-1fc5-457d-9fa7-b0d339ffb336 | AMPH1.ACT_STNDB.4-2048 |
    +--------------------------------------+------------------------+

    Here:

    • id — Load balancer flavor ID;
    • name — the name of the flavor that corresponds to load balancer type:
      • AMPH1.SNGL.2-1024 — type Basic without reservation;
      • AMPH1.ACT_STNDB.2-1024 — type Basic with reservation;
      • AMPH1.ACT_STNDB.4-2048 — type Advanced with reservation.
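
Once the flavor ID or name is known, it can be passed when creating a balancer. Below is a minimal sketch for the OpenStack CLI; the balancer name and the subnet are placeholder examples, and the flavor ID is the ru-9 Advanced with reservation ID shown above.

    # Create a load balancer of the Advanced with reservation type in the ru-9 pool.
    # "my-balancer" and "my-subnet" are example names; replace them with your own.
    openstack loadbalancer create \
      --name my-balancer \
      --vip-subnet-id my-subnet \
      --flavor ac18763b-1fc5-457d-9fa7-b0d339ffb336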

How a load balancer works

Load balancer operation diagram

The load balancer is based on the OpenStack Octavia model, which includes:

  • instance (amphora) — performs load balancing. It runs on a cloud server and uses HAProxy (High Availability Proxy) software to proxy traffic. In redundant load balancers (the Basic with reservation and Advanced with reservation types), two instances are created; without redundancy, one;
  • target group (pool) — a group of servers to which the rule redirects requests using the protocol specified for the group;
  • servers (members) — servers that serve traffic in the pool. They are accessible by the IP address and port specified for the server within the target group;
  • availability checks (health monitor) — the process of checking the health of all servers in the target group;
  • rule (listener) — listens for the traffic coming to the load balancer on the protocols and ports specified in the rule, then routes the traffic to the required group of servers;
  • HTTP policy (L7 policy) — additional conditions in a rule for routing HTTP traffic with certain parameters.

Target groups

A target group is a group of servers to which traffic from a load balancer is distributed. A server can belong to several target groups of the same load balancer, if different ports are specified for the server in these groups.
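
For example, a target group and its servers can be created through the OpenStack CLI roughly as follows; the balancer, group, and server names, the addresses, and the ports are placeholders:

    # Create a target group (pool) on the balancer, serving the HTTP protocol.
    openstack loadbalancer pool create \
      --name web-pool \
      --loadbalancer my-balancer \
      --protocol HTTP \
      --lb-algorithm ROUND_ROBIN

    # Add servers (members) to the target group; each is defined by an IP address and a port.
    openstack loadbalancer member create --name web-1 --address 192.168.0.11 --protocol-port 8080 web-pool
    openstack loadbalancer member create --name web-2 --address 192.168.0.12 --protocol-port 8080 web-pool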

For a target group, you can configure the settings described in the sections below.

Availability checks

You can enable availability checks for a target group. The balancer will monitor the state of the servers: if a server goes down, the balancer will redirect connections to another server.

Check settings (a CLI sketch follows the list):

  • check type. Depending on the target group protocol, the following types are available:

    • TCP group — TCP, PING;
    • PROXY group — TLS-HELLO, HTTP, TCP, PING;
    • UDP group — UDP-CONNECT, PING;
    • HTTP group — HTTP, TCP, PING;
  • for the HTTP check type, you can configure the URL to request and the expected response codes;

  • check interval — the interval in seconds with which the balancer sends check requests to the servers;

  • connection timeout — the time to wait for a response;

  • success threshold — the number of successful requests in a row, after which the server is switched to the working state;

  • failure threshold — the number of unsuccessful checks in a row, after which the server operation is suspended.
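
As a sketch, an HTTP availability check with the settings listed above might be configured through the OpenStack CLI as follows; the check name, target group name, URL path, and values are placeholders:

    # Attach an HTTP availability check to the target group "web-pool":
    # --delay is the check interval in seconds, --timeout is the connection timeout,
    # --max-retries is the success threshold, --max-retries-down is the failure threshold.
    openstack loadbalancer healthmonitor create \
      --name web-health \
      --type HTTP \
      --url-path /health \
      --expected-codes 200 \
      --delay 10 \
      --timeout 5 \
      --max-retries 3 \
      --max-retries-down 3 \
      web-pool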

Rules

A rule is a balancer setting that serves a traffic flow with a specific port and protocol and distributes that traffic to the correct group of servers.

In a rule, you can configure the settings described in the sections below.

The number of rules in the balancer is unlimited.
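
For example, a rule that accepts HTTP traffic on port 80 and sends it to an existing target group could be created through the OpenStack CLI like this; the rule, target group, and balancer names are placeholders:

    # Create a rule (listener) on the balancer: listen for HTTP traffic on port 80
    # and route it to the target group "web-pool" by default.
    openstack loadbalancer listener create \
      --name http-rule \
      --protocol HTTP \
      --protocol-port 80 \
      --default-pool web-pool \
      my-balancer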

HTTP Policies

An HTTP policy is an addition to a rule that routes certain HTTP and HTTPS traffic separately from the rest of the traffic. A policy can:

  • reassign target group (REDIRECT_TO_POOL);
  • direct to URL — completely replace the request URL, including protocol, domain name, path and request parameters (REDIRECT_TO_URL);
  • direct to URL prefix — replace the protocol and domain name in the request URL (REDIRECT_PREFIX);
  • reject (REJECT).

The request is redirected by the first matching policy. The order in which the policies are applied depends on the policy action: REJECT policies are applied first, followed by REDIRECT_TO_URL and REDIRECT_PREFIX, then REDIRECT_TO_POOL. If there are several policies with the same action in a rule, they are applied according to the position of the policy in the rule. You can change the order in which policies are applied.

An HTTP policy consists of a set of conditions; the number of conditions in a policy is unlimited. For a request to fall under the policy, it must meet all of the policy's conditions. A condition specifies:

  • the request parameter to check: HOST_NAME or PATH. When configuring a policy through the OpenStack CLI, you can also create a condition on the COOKIE, FILE_TYPE, and HEADER parameters;
  • the reference value to check against, either an exact value or a regular expression;
  • type of match with the control value: EQUAL TO, STARTS WITH, ENDS WITH, CONTAINS, REGEX.

The number of HTTP policies in a rule is unlimited.
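
As a sketch, a policy that sends requests whose path starts with /api to a separate target group might look like this in the OpenStack CLI; the policy, target group, and rule names are placeholders:

    # Create an HTTP policy on the rule "http-rule": redirect matching requests
    # to the target group "api-pool".
    openstack loadbalancer l7policy create \
      --name api-policy \
      --action REDIRECT_TO_POOL \
      --redirect-pool api-pool \
      http-rule

    # Add a condition to the policy: the request PATH must start with /api.
    openstack loadbalancer l7rule create \
      --type PATH \
      --compare-type STARTS_WITH \
      --value /api \
      api-policy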

Load Balancer Ports

Load balancer instances utilize multiple ports:

  • inbound port (uplink). This is a virtual port that hosts the VIP virtual IP address. The rule listens for incoming traffic on this port. It is allocated when the load balancer is created and is located in its subnet. In redundant load balancers (the Basic with reservation and Advanced with reservation types), the VIP is reserved using the VRRP protocol;
  • service VRRP ports. When creating a basic load balancer, one service port is allocated on its subnet. When creating a redundant load balancer, two service ports are allocated for the primary and backup instances, and VRRP is configured between them;
  • service ports (downlinks). If the servers are not in the balancer's subnet, ports for instances are allocated in subnets with servers when the balancer is created: one port for a basic balancer, two ports (primary and backup) for redundant balancers.

If a load balancer instance malfunctions, a new instance is created automatically and only then is the old one deleted — this requires a free port. If there is no free port, the load balancer switches to the ERROR status.

If you chose a public subnet as the load balancer subnet when you created the load balancer and will be hosting servers on it, make sure it has an additional IP address, or use a public network of /28 or larger.

We recommend choosing a private network with a public IP address (the address is needed to access the Internet) — in this case, a free IP address will always be available for recreating the instance. Traffic balancing will be performed inside the private network.

Protocols

The following protocol combinations (rule to target group) are available; an example of HTTPS termination follows the list:

  • TCP-TCP is classic L4 balancing;
  • TCP-PROXY — client information is not lost and is transmitted in a separate connection header;
  • UDP-UDP — The UDP protocol is faster than TCP, but less reliable;
  • HTTP-HTTP — L7-balancing;
  • HTTPS-HTTP — L7 balancing with encryption and SSL certificate termination on the balancer.
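
For the HTTPS-HTTP combination, the certificate is terminated on the balancer: in the OpenStack CLI this typically corresponds to a rule with the TERMINATED_HTTPS protocol that references a certificate stored in the secrets manager, while the target group keeps the HTTP protocol. A rough sketch; the secret reference and the names are placeholders:

    # Rule that terminates TLS on the balancer and forwards plain HTTP to the target group.
    # <secret-ref> is a reference to the TLS certificate container in the secrets manager.
    openstack loadbalancer listener create \
      --name https-rule \
      --protocol TERMINATED_HTTPS \
      --protocol-port 443 \
      --default-tls-container-ref <secret-ref> \
      --default-pool web-pool \
      my-balancer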

Request distribution algorithms

The rule distributes requests according to the selected algorithm. Two algorithms are available (a CLI sketch follows the list):

  • Round Robin is a round robin algorithm. The first request is passed to one server, the next request to another and so on until the last server is reached. Then the cycle starts again. Requests are distributed to servers according to the specified weight.
  • Least connections — the algorithm takes into account the number of connections to servers. A new request is passed to the server with the smallest number of active connections, the server weight is not taken into account.
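
The algorithm is set on the target group; with Round Robin, the share of requests a server receives can also be adjusted through its weight. A minimal OpenStack CLI sketch; the group and server names and the values are placeholders:

    # Switch the target group to the Least connections algorithm.
    openstack loadbalancer pool set --lb-algorithm LEAST_CONNECTIONS web-pool

    # With Round Robin, a member's weight controls its share of requests:
    # a server with weight 2 receives roughly twice as many requests as one with weight 1.
    openstack loadbalancer member set --weight 2 web-pool web-1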

Sticky Sessions

Additionally, sticky sessions can be enabled. This method is needed when the application maintains a long-lived connection with each client and stores internal state that is not synchronized between the servers in the rule.

New requests will be distributed according to the selected algorithm, and then the session will be assigned to the server that started processing requests. All subsequent requests of this session will be distributed to the server without considering the selected algorithm. If the server is unavailable, the request will be redirected to another one.

You can configure the session identification settings, which determine how a client is pinned to a server. A session can be identified (a CLI sketch follows the list):

  • by APP-cookie — an already existing cookie that is set in the application code;
  • by HTTP-cookie — a cookie that is created and attached to the session by the balancer;
  • by Source IP — the client's IP address is hashed and divided by the weight of each server in the target group — this is how the server that will process the requests is determined.
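
Sticky sessions are configured on the target group through its session persistence setting. A rough OpenStack CLI sketch; the cookie name and the group name are placeholders:

    # Pin sessions by a cookie that is already set in the application code (APP-cookie).
    openstack loadbalancer pool set \
      --session-persistence type=APP_COOKIE,cookie_name=MY_APP_SESSION \
      web-pool

    # Alternative modes: a cookie created by the balancer, or the client's IP address.
    #   --session-persistence type=HTTP_COOKIE
    #   --session-persistence type=SOURCE_IP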

Connection settings

You can configure the connections that pass through the balancer — between clients and the balancer, and between the balancer and the servers.

Connection settings (a CLI sketch follows the list):

  • connection timeout — the time to wait for a response;
  • maximum connections — the maximum number of active connections;
  • inactivity timeout — the time during which the connection is considered active even if no data is transmitted;
  • TCP packets waiting timeout — the time during which the balancer waits for data transmission for inspection over an already established connection.
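
In the OpenStack CLI these settings roughly correspond to the rule (listener) options shown below; the timeouts are specified in milliseconds, and the rule name and values are placeholders:

    # Connection settings on the rule "http-rule" (timeouts in milliseconds):
    # --connection-limit is the maximum number of connections,
    # --timeout-member-connect is the connection timeout,
    # --timeout-client-data is the inactivity timeout,
    # --timeout-tcp-inspect is the TCP packets waiting timeout.
    openstack loadbalancer listener set \
      --connection-limit 10000 \
      --timeout-member-connect 5000 \
      --timeout-client-data 50000 \
      --timeout-tcp-inspect 0 \
      http-rule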

HTTP request headers

In normal mode of operation, the balancer passes only the original body of the HTTP request to the server, replacing the client's IP address with its own.

Enable the additional headers you need so that the servers receive this information for correct operation or analysis (a CLI sketch follows the list):

  • X-Forwarded-For — The IP address from which the request came;
  • X-Forwarded-Port — the port of the balancer to which the request came;
  • X-Forwarded-Proto — the original connection protocol;
  • X-SSL-Client-Verify — whether the client used a secure connection;
  • X-SSL-Client-Has-Cert — availability of a certificate from the client;
  • X-SSL-Client-DN — owner's identifying information;
  • X-SSL-Client-CN — The name of the host for which the certificate is issued;
  • X-SSL-Issuer — the certification authority where the certificate was issued;
  • X-SSL-Client-SHA1 — SHA1 fingerprint of the client certificate;
  • X-SSL-Client-Not-Before — the beginning of certificate validity;
  • X-SSL-Client-Not-After — certificate expiration.
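
When configuring through the OpenStack CLI, the headers can be enabled on the rule with the --insert-headers option; the header set below and the rule name are placeholder examples:

    # Forward the client IP address, the balancer port, and the original protocol to the servers.
    openstack loadbalancer listener set \
      --insert-headers X-Forwarded-For=true,X-Forwarded-Port=true,X-Forwarded-Proto=true \
      http-rule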

Cost

Balancers are billed according to the cloud platform payment models.

The cost of balancers can be viewed at selectel.ru.