Link Aggregation

amirius edited this page Nov 29, 2017 · 5 revisions
Table of Contents
  1. Link Aggregation in Linux
  2. Bond Device Configuration
  3. Team Device Configuration
  4. Link Aggregation and Bridge
  5. Further Resources

Link Aggregation in Linux

There are two implementations of link aggregation (LAG) in Linux: bonding and team.

Bonding (the bond driver) is the older and more widely deployed implementation. Team is a much newer implementation with a clear and modular design, and it is recommended for new installations. For more information about the team device please refer to this Infrastructure Specification.

Bond Device Configuration

To create a bond device, run:

$ ip link add name bond0 type bond

Different operation modes can be used with both bond and team devices; however, only LACP mode is currently supported by the ASIC. To set the bond device to LACP (802.3ad) mode, run:

$ ip link set dev bond0 type bond mode 802.3ad
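Note that the bonding driver only allows the mode to be changed while the bond is down and has no enslaved ports. To confirm the configured mode afterwards, the detailed link attributes can be inspected (the exact output depends on the iproute2 version):

```shell
# Show detailed bond attributes; the output should include "mode 802.3ad":
ip -d link show dev bond0
```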

As with the bridge device, enslaving a port netdev to the bond device is performed using the following commands:

$ ip link set dev sw1p5 master bond0
$ ip link set dev sw1p6 master bond0
$ ip link set dev bond0 up
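Once ports are enslaved, the bonding driver exposes per-slave state through procfs, which is a quick way to check that LACP negotiation succeeded:

```shell
# Per-bond status, including the 802.3ad aggregator info and the
# LACP state of each enslaved port:
cat /proc/net/bonding/bond0

# The enslaved ports are also visible through regular link listings:
ip link show master bond0
```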

To remove a port netdev from a bond, run:

$ ip link set dev sw1p5 nomaster

And to delete the bond device, run:

$ ip link del dev bond0

Note: Enslaving a port netdev which already has a VLAN device as an upper device to a LAG device (either bond or team) is not supported.

Team Device Configuration

To create a team device in LACP mode, run:

$ teamd -t team0 -d -c '{"runner": {"name": "lacp"}}'
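For a persistent setup, the same runner configuration is usually kept in a JSON file and passed with -f instead of inline with -c. A minimal sketch follows; the file path is an example, and the runner options shown (active, fast_rate) are documented in teamd.conf(5):

```shell
# Write a minimal LACP configuration for teamd (path is an example):
cat > /etc/teamd/team0.conf <<'EOF'
{
    "device": "team0",
    "runner": {
        "name": "lacp",
        "active": true,
        "fast_rate": true
    },
    "link_watch": { "name": "ethtool" }
}
EOF

# Start teamd as a daemon using the configuration file:
teamd -d -f /etc/teamd/team0.conf
```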

To enslave port netdevs to the team device, run:

$ ip link set dev sw1p5 master team0
$ ip link set dev sw1p6 master team0
$ ip link set dev team0 up

To remove a port netdev from a team device, run:

$ ip link set dev sw1p5 nomaster

And to delete the team device, run:

$ teamd -t team0 -k

Link Aggregation and Bridge

A typical use case for switches is to bridge LAG devices together or with other switch ports.

Assuming we have the following topology:

+--------------+    +--------------------+    +--------------+
|              |    |       switch       |    |              |
|         eth0--------sw1p3        sw1p4--------eth0         |
|   hostA      |    |                    |    |     hostB    |
|         eth1--------sw1p5        sw1p6--------eth1         |
|              |    |                    |    |              |
+--------------+    +--------------------+    +--------------+

To allow hostA and hostB to communicate with each other using LAG over the parallel links, run:

hostA$ teamd -t team0 -d -c '{"runner": {"name": "lacp"}}'
hostA$ ip link set eth0 master team0
hostA$ ip link set eth1 master team0
hostA$ ip link set team0 up
hostA$ ip address add 192.168.1.101/24 dev team0

hostB$ teamd -t team0 -d -c '{"runner": {"name": "lacp"}}'
hostB$ ip link set eth0 master team0
hostB$ ip link set eth1 master team0
hostB$ ip link set team0 up
hostB$ ip address add 192.168.1.102/24 dev team0

switch$ teamd -t team0 -d -c '{"runner": {"name": "lacp"}}'
switch$ ip link set sw1p3 master team0
switch$ ip link set sw1p5 master team0
switch$ ip link set team0 up
switch$ teamd -t team1 -d -c '{"runner": {"name": "lacp"}}'
switch$ ip link set sw1p4 master team1
switch$ ip link set sw1p6 master team1
switch$ ip link set team1 up
switch$ ip link add name br0 type bridge
switch$ ip link set dev br0 type bridge vlan_filtering 1
switch$ ip link set team0 master br0
switch$ ip link set team1 master br0
switch$ ip link set br0 up
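To confirm that both team devices were enslaved to br0, the bridge port list can be checked:

```shell
# List bridge ports and their state:
bridge link show

# Or list only the devices whose master is br0:
ip link show master br0
```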

To display the state of a team device, run:

$ teamdctl team0 state
setup:
  runner: lacp
ports:
  sw1p5
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 174, Selected
      selected: yes
      state: current
  sw1p3
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 174, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: no
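For scripting, teamdctl can emit the same state as JSON; the state dump command is documented in teamdctl(8):

```shell
# Machine-readable equivalent of "teamdctl team0 state":
teamdctl team0 state dump
```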

Since team0 and team1 are like any other switch port, it is possible to configure VLAN devices on top of them and bridge them together in a VLAN-unaware bridge:

hostA$ ip link add link team0 name team0.10 type vlan id 10
hostA$ ip link set dev team0.10 up
hostA$ ip address add 192.168.2.101/24 dev team0.10

hostB$ ip link add link team0 name team0.20 type vlan id 20
hostB$ ip link set dev team0.20 up
hostB$ ip address add 192.168.2.102/24 dev team0.20

switch$ ip link add link team0 name team0.10 type vlan id 10
switch$ ip link set dev team0.10 up
switch$ ip link add link team1 name team1.20 type vlan id 20
switch$ ip link set dev team1.20 up
switch$ ip link add name br1 type bridge
switch$ ip link set dev team0.10 master br1
switch$ ip link set dev team1.20 master br1
switch$ ip link set dev br1 up

team0 and team1 are also like any other bridge port, so bridge port attributes can be configured on them. For example, to disable address learning and unknown-unicast flooding on team0, run:

switch$ bridge link set dev team0 learning off flood off
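The resulting per-port flags can be read back with the detailed bridge output:

```shell
# Show detailed bridge port attributes, including the learning and
# flood flags set above:
bridge -d link show dev team0
```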

For more information about these attributes please refer to the Bridge document.

Further Resources

  1. man ip
  2. man teamd
  3. man teamdctl