MC-LAG (Multi-chassis Link Aggregation Group) provides a redundant connection to the local network and Internet access switches and increases the fault tolerance of the infrastructure. For servers with ready-made configurations, only the local network connection can be made redundant. Redundancy is not available for all configurations.
MC-LAG can only be configured for servers that have a redundant NIC and MC-LAG in their configuration.
For servers with redundant MC-LAG connectivity, Selectel ensures that one of the access switches is always available, including during scheduled maintenance.
Principle of operation
The server is connected to two independent switches via a LAG (Link Aggregation Group). The LACP protocol (IEEE 802.3ad) is used for the connection, and link aggregation is configured on the server side. In this case, both links from the access switches to the server are active simultaneously. Available connection speeds:
10 Gbit/s - for the public network; an optical cross-connect is used for the connection;
10 Gbit/s - for the local network; an optical cross-connect is used for the connection;
25 Gbit/s - for the local network; an optical cross-connect is used for the connection.
Cost
The cost of the MC-LAG redundant connection depends on the selected connection speed.
You can view the cost in the configurator on the website or when selecting server components in the control panel.
Configure MC-LAG
Make sure that the dedicated server configuration includes a redundant NIC and MC-LAG. If there is no redundant NIC, you can order a new server with redundancy or change the components of a server with a custom configuration.
Wait for the server readiness message from technical support. The switch ports will be aggregated.
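Describe the bond in the netplan configuration file on the server. A possible configuration is sketched below; the file name in /etc/netplan/ and the exact keys depend on your netplan version, so adjust them to your system:

network:
  version: 2
  ethernets:
    <eth_name_1>: {}
    <eth_name_2>: {}
  bonds:
    bond0:
      interfaces: [<eth_name_1>, <eth_name_2>]
      addresses: [<ip_address>/<mask>]
      gateway4: <gateway_4>
      gateway6: <gateway_6>
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4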
Specify:
<eth_name_1>, <eth_name_2> - the names of the network interfaces included in the aggregation;
<ip_address> - the IP address to use on the aggregated interface;
<mask> - subnet mask;
<gateway_4>, <gateway_6> - IPv4 and IPv6 gateway addresses.
Apply the new configuration:
netplan --debug apply
Verify that the bond0 network interface is assembled correctly:
cat /proc/net/bonding/bond0
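In the output, the bonding mode should be IEEE 802.3ad and the MII status should be up for bond0 and for both member interfaces. The beginning of the output looks approximately like this (a sketch; the exact fields vary by kernel version):

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100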
Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
Check that the bonding kernel module is loaded on the server:
lsmod | grep bond
If the command returns no output, the bonding kernel module is not loaded. Load it:
sudo modprobe bonding
Install the package for managing and configuring bonded (aggregated) interfaces:
apt-get install ifenslave
Display information about the network interfaces:
ifconfig -a
Shut down, one by one, each network interface that will be included in the aggregation:
ifdown <eth_name>
Here <eth_name> is the name of the interface obtained in step 5.
Open the /etc/network/interfaces.d/50-cloud-init file:
nano /etc/network/interfaces.d/50-cloud-init
Edit the settings for the network interfaces that will be included in the aggregation to match the following:
auto lo
iface lo inet loopback
auto <eth_name_1>
iface <eth_name_1> inet manual
bond-master bond0
bond-primary <eth_name_1> <eth_name_2>
auto <eth_name_2>
iface <eth_name_2> inet manual
bond-master bond0
bond-primary <eth_name_1> <eth_name_2>
auto bond0
iface bond0 inet static
bond-slaves <eth_name_1> <eth_name_2>
bond-mode 802.3ad
bond-miimon 100
bond-downdelay 100
bond-updelay 100
bond-xmit-hash-policy layer3+4
address <ip_address>
netmask <mask>
gateway <gateway>
dns-nameservers <dns_servers>
Specify:
<eth_name_1>, <eth_name_2> - the names of the network interfaces included in the aggregation;
<ip_address> - the IP address to use on the aggregated interface;
<mask> - subnet mask;
<gateway> - gateway address;
<dns_servers> - DNS server addresses. We recommend using Selectel recursive DNS servers, but you can specify any available DNS servers.
Apply the network configuration changes by bringing up the bond0 network interface:
ifup bond0
Restart the networking service:
/etc/init.d/networking restart
Verify that the bond0 network interface is assembled correctly:
cat /proc/net/bonding/bond0
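In the output, the bonding mode should be IEEE 802.3ad and the MII status should be up for bond0 and for both member interfaces. The beginning of the output looks approximately like this (a sketch; the exact fields vary by kernel version):

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100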
In Windows Server 2019, you can consolidate multiple network interfaces into a single logical interface using NIC Teaming.
Server Manager
PowerShell
Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
Start Server Manager.
Open the Local Server → Properties block.
Click NIC Teaming.
In the Servers block, select the server to configure.
In the Groups block, click Tasks and select New Team.
In the Team name field, enter the name of the group.
In the Member adapters box, check the network adapters that you want to add to the group.
In the Teaming mode field, select LACP.
In the Load balancing mode field, select the load balancing algorithm.
Optional: in the Primary team interface field, enter the VLAN ID for the team interface if it is used on a private network and you have Q-in-Q enabled. Do not specify a VLAN ID for the public network interface.
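The same group can be created from PowerShell with the New-NetLbfoTeam cmdlet (a sketch; the team name Team1 and the adapter names NIC1 and NIC2 are placeholders for your values):

New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic

You can verify the result with Get-NetLbfoTeam.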
Starting with Windows Server 2022, NIC Teaming technology is replaced by Switch Embedded Teaming (SET). SET can only be configured when creating a Hyper-V virtual switch.
Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
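Creating a Hyper-V virtual switch with SET enabled can be sketched as follows (the switch name SETswitch and the adapter names NIC1 and NIC2 are placeholders; the Hyper-V role must be installed):

New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

You can verify the result with Get-VMSwitchTeam -Name "SETswitch".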