MC-LAG redundant connection
MC-LAG (Multi-chassis Link Aggregation Group) is a multi-chassis link aggregation technology. It provides a redundant connection to the LAN and Internet access switches and increases the fault tolerance of the infrastructure. For off-the-shelf configuration servers, only the LAN connection can be made redundant. Redundancy is not available for all configurations.
MC-LAG can only be configured for servers that have a redundant NIC and MC-LAG included in their configuration.
For servers with redundant MC-LAG connectivity, Servercore ensures that one of the access switches is always available, including during scheduled maintenance.
Principle of operation
The server is connected to two independent switches via a LAG (Link Aggregation Group). The LACP (802.3ad) protocol is used for the connection, and link aggregation is configured on the server side. Both links from the access switches to the server are active at the same time.
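For example, on a Linux server with aggregation already configured, you can confirm that both links negotiated LACP (a minimal sketch; bond0 is the bond interface name used in the steps below):
# The bond should report 802.3ad mode and "MII Status: up" for itself and both member links
grep -E 'Bonding Mode|MII Status' /proc/net/bonding/bond0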


Connection speed
For servers of arbitrary configuration:
- 1 Gbps — a copper cross-connect is used for the connection;
- 10 Gbps — an optical cross-connect is used for the connection;
- 25 Gbps — for LAN only; an optical cross-connect is used for the connection.
For off-the-shelf configuration servers:
- 10 Gbps — for LAN only; an optical cross-connect is used for the connection.
Cost
The cost of the MC-LAG redundant connection depends on the selected connection speed.
You can view the cost in the configurator on the site, or when selecting server components in the control panel.
Customize MC-LAG
- Make sure that the dedicated server configuration includes a redundant NIC and MC-LAG. If there is no redundant NIC, you can order a new server with a redundant configuration or change the components of an arbitrary-configuration server.
- Wait for the server readiness message from technical support. The switch ports will be combined into an aggregation group.
- Configure link aggregation (LAG) on the server.
Configure link aggregation on the server
Do not connect to the server on network interfaces that will be included in the aggregation. You will need to disconnect them during configuration.
Debian 9, 10 (LACP)
Ubuntu (netplan)
Windows Server 2019
Windows Server 2022
- Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
- Check that the bonding kernel module is loaded on the server:
lsmod | grep bond
If the output is empty, the bonding kernel module is not loaded.
- If the bonding kernel module is not loaded, load it:
sudo modprobe bonding
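modprobe loads the module only until the next reboot. If you want the module to load automatically at boot, you can additionally register it in /etc/modules (an optional step; this assumes the standard Debian boot-time module list):
echo bonding | sudo tee -a /etc/modules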
- Install the package for configuring network interfaces for parallel routing (bonding):
sudo apt-get install ifenslave
- Output information about the network interfaces:
ifconfig -a
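If the ifconfig utility is missing (it ships in the net-tools package, which recent Debian releases do not install by default), the iproute2 equivalent shows the same information:
ip -br link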
- Shut down each network interface that will be included in the aggregation, one by one:
ifdown <eth_name>
Specify <eth_name> — the interface name.
- Open the /etc/network/interfaces file:
nano /etc/network/interfaces
- Bring the settings for the network interfaces that will be included in the aggregation to the following:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto <eth_name_1>
iface <eth_name_1> inet manual
bond-master bond0
bond-primary <eth_name_1> <eth_name_2>

auto <eth_name_2>
iface <eth_name_2> inet manual
bond-master bond0
bond-primary <eth_name_1> <eth_name_2>

auto bond0
iface bond0 inet static
bond-slaves <eth_name_1> <eth_name_2>
bond-miimon 100
bond-mode 802.3ad
bond-downdelay 100
bond-updelay 100
bond-xmit-hash-policy layer3+4
address <ip_address>
netmask <mask>
gateway <gateway>
dns-nameservers <dns_servers>

Specify:
<eth_name_1>, <eth_name_2> — the names of the network interfaces included in the aggregation;
<ip_address> — the IP address to use on the aggregated interface;
<mask> — the subnet mask;
<gateway> — the gateway address;
<dns_servers> — the DNS server addresses. We recommend using Servercore recursive DNS servers, but you can specify any available DNS servers.
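For illustration only, a filled-in bond0 stanza might look like this (the interface names and addresses are hypothetical placeholders, not values from your order):
auto bond0
iface bond0 inet static
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-downdelay 100
bond-updelay 100
bond-xmit-hash-policy layer3+4
address 192.0.2.10
netmask 255.255.255.0
gateway 192.0.2.1
dns-nameservers 192.0.2.53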
- Bring up the bond0 network interface:
ifup bond0
- Restart the network service:
/etc/init.d/networking restart
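If the server uses systemd (the default on Debian 9 and 10), the same restart can usually be done with:
sudo systemctl restart networking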
- Verify that the bond0 network interface is assembled correctly:
cat /proc/net/bonding/bond0
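With a working LACP aggregation, the output includes lines similar to the following (the exact wording varies with the kernel version), plus an "MII Status: up" entry for each member interface:
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up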
- Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
- Output information about the network interfaces:
ip link
- Open the /etc/netplan/01-netcfg.yaml file:
nano /etc/netplan/01-netcfg.yaml
- Bring the settings for the network interfaces that will be included in the aggregation to the following:
network:
  version: 2
  renderer: networkd
  ethernets:
    <eth_name_1>:
      dhcp4: false
    <eth_name_2>:
      dhcp4: false
  bonds:
    bond0:
      addresses:
        - <ip_address>/<mask>
      gateway4: <gateway_4>
      gateway6: <gateway_6>
      interfaces:
        - <eth_name_1>
        - <eth_name_2>
      # https://netplan.io/reference#properties-for-device-type-bonds
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        transmit-hash-policy: layer3+4

Specify:
<eth_name_1>, <eth_name_2> — the names of the network interfaces included in the aggregation;
<ip_address> — the IP address to use on the aggregated interface;
<mask> — the subnet prefix length in CIDR notation, for example 24;
<gateway_4>, <gateway_6> — the IPv4 and IPv6 gateway addresses.
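As a hypothetical filled-in example (the interface names and addresses are placeholders, not values from your order):
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
    eno2:
      dhcp4: false
  bonds:
    bond0:
      addresses:
        - 192.0.2.10/24
      gateway4: 192.0.2.1
      interfaces:
        - eno1
        - eno2
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        transmit-hash-policy: layer3+4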
- Apply the new configuration:
netplan --debug apply
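If you are connected over the network, netplan try can be a safer alternative on recent Ubuntu releases: it applies the configuration and rolls it back automatically unless you confirm it within a timeout:
sudo netplan try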
- Verify that the bond0 network interface is assembled correctly:
cat /proc/net/bonding/bond0
In Windows Server 2019, you can consolidate multiple network interfaces into a single logical interface using NIC Teaming.
Server Manager
PowerShell
- Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
- Start Server Manager.
- Open the Local Server → Properties block.
- Click NIC Teaming.
- In the Servers block, select the server to configure.
- In the Teams block, click Tasks and select New Team.
- In the Team name field, enter the name of the team.
- In the Member adapters block, check the network adapters that you want to add to the team.
- In the Teaming mode field, select LACP.
- In the Load balancing mode field, select the load balancing algorithm.
- Optional: in the Primary team interface field, enter the VLAN ID for the team interface if it is used on a private network and you have Q-in-Q enabled. Do not use the VLAN ID of the public network interface.
- Click OK.
- Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
- Run PowerShell as an administrator.
- Create a NIC team:
New-NetLbfoTeam -Name <group_name> -TeamMembers "<eth_name_1>","<eth_name_2>" -TeamingMode <teaming_mode> -LoadBalancingAlgorithm <algorithm>
Specify:
<group_name> — the team name;
<eth_name_1>, <eth_name_2> — the names of the interfaces that you want to add to the team;
<teaming_mode> — the teaming mode (Lacp for an MC-LAG connection);
<algorithm> — the load balancing algorithm.
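A hypothetical invocation with placeholder values, followed by a status check (Get-NetLbfoTeam is the standard cmdlet for inspecting NIC teams):
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet1","Ethernet2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
# Status should report Up once LACP has negotiated with the switches
Get-NetLbfoTeam -Name "Team1"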
Starting with Windows Server 2022, NIC Teaming technology is replaced by Switch Embedded Teaming (SET). SET can only be configured when creating a Hyper-V virtual switch.
- Connect to the server on a network interface that will not be included in the aggregation, or through a KVM console.
- Run PowerShell as an administrator.
- Create a VMSwitch:
New-VMSwitch -Name <switch_name> -NetAdapterName "<eth_name_1>","<eth_name_2>" -EnableEmbeddedTeaming $true
Specify:
<switch_name> — the name of the virtual switch;
<eth_name_1>, <eth_name_2> — the names of the interfaces that you want to add to the team.
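A hypothetical filled-in example with a follow-up check (the names are placeholders; Get-VMSwitchTeam is the standard cmdlet for inspecting a SET switch):
New-VMSwitch -Name "SETswitch" -NetAdapterName "Ethernet1","Ethernet2" -EnableEmbeddedTeaming $true
# Lists the member adapters of the embedded team
Get-VMSwitchTeam -Name "SETswitch"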