Interested in LXD and LXC?  Check out our new LXC/LXD Deep Dive course here at Linux Academy!  We cover topics including installation, launching containers, persistent storage, networking, and even cover some fascinating use cases to make LXC useful and relevant to you right now!

We’ve also created this LXC-LXD Cheat Sheet to help you get started with LXD right away!


LXD is a really fun and easy way to jump into containers, especially if you have some experience with virtual machines.  LXD is designed to create machine containers, which strongly resemble virtual machines, so trying out new distributions or testing application deployments is easy and – dare I say it – fun.  LXD 2.0 brought myriad new features to the platform, but a few tasks remain adorably unfledged.  A single LXD node, for instance, is easily initialized, and with the default settings its containers come up on their own private network.  Once a second node is added, a major limitation becomes obvious: each node has its own private network for its containers, so without some networking jujitsu, containers on different hosts will never be able to communicate with one another.

Getting all those containers on the same layer 2 network, regardless of which host each one resides on, is what this post is all about.  

A Modest Proposal

There are probably as many ways to solve this problem as there are network engineers.  I sought a solution that would be simple to explain, could be implemented on many disparate network architectures, and would offer the same features as single-node LXD.  So, it needed to retain name resolution across nodes and network address translation, without the complication of segmenting the container network.  After rolling up my sleeves and getting most of this working, I started to realize that my initial solution was neither simple nor easy to explain, which is probably why I could not find much guidance on the internet for getting this done.  Then I realized that I didn’t need to manually recreate the infrastructure that lxd init gives me on the initial node – I just needed to extend its reach.

My solution involves first initializing LXD on a master node using the default settings (with the added benefit of keeping an existing LXD node, with all its images, containers, and so forth, intact if necessary).  Next, a secondary node is brought up and a GRE tunnel is built from a generic bridge on the new node to the lxdbr0 bridge on the existing node.  Finally, LXD is initialized on the second node using the new bridge instead of the built-in lxdbr0.  Network address translation works, container hostname resolution works, and containers retain their IP addresses even after being transferred from one host to another.

Making It Work

If you’ve already got an LXD node up and running, you can skip the initial part of these instructions.  You will still need to make a few changes on your primary node, but you shouldn’t need to reinstall or reinitialize LXD.  The environment consists of two Ubuntu 16.04 hosts, each on a network connected to the internet.  They need not be on the same network, but they do need to be able to reach one another over GRE (IP protocol 47), so some firewalls or security rules may thwart your attempts.  I didn’t have any problems in my testing in my home lab or on Linux Academy’s Cloud Servers.  “Alpha” is the host that will be acting as the master server.  This is the one that might have an LXD installation up and running already.  “Bravo” is the host that we’ll be adding to our cluster.  I’m assuming LXD has not been initialized there before.  Before beginning, note the IP addresses of Alpha and Bravo.  In the instructions, I’ll refer to these IP addresses as if they were set as environment variables $ALPHA_IP and $BRAVO_IP.
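For example, you might set those variables on both hosts before running the commands below.  The addresses here are placeholders from the RFC 5737 documentation range – substitute the real IPs of your own hosts:

```shell
# Placeholder addresses for illustration only; replace with your hosts' real IPs.
export ALPHA_IP=192.0.2.10
export BRAVO_IP=198.51.100.20
```

Setting them once per shell session means every command below can be pasted as-is.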

    1. Install some tools on Alpha.  We’re going to be building a new tunnel, so we should make certain the proper bridge utilities are installed:
      sudo apt-get install bridge-utils
    2. (Optional) If LXD isn’t initialized on Alpha, go ahead and do that now.  Let LXD handle the creation of the lxdbr0, as we’re going to use the services it manages for the entire cluster.  Accepting the defaults is fine for our purposes if you’re unsure of precisely what you want.
      sudo lxd init
    3. On Alpha, we need to start building our GRE tunnel.  This is done on each end by defining the link, plugging the link into the appropriate bridge, and then bringing the link up:
      sudo ip link add contgre type gretap remote $BRAVO_IP local $ALPHA_IP ttl 255
      sudo brctl addif lxdbr0 contgre
      sudo ip link set contgre up
    4. On Bravo, we need to complete the link.  First, we’ll create a new bridge for the containers residing on Bravo to use; next, we’ll set up the link in a manner similar to how we set it up on Alpha.  Once these steps are completed, it’s as if we have two network switches connected with an ethernet cable: one switch on each host for local containers, with network services like NAT, DHCP, and DNS running on Alpha and managed by LXD.
      sudo apt-get install bridge-utils
      sudo brctl addbr multibr0
      sudo ip link add contgre type gretap remote $ALPHA_IP local $BRAVO_IP ttl 255
      sudo brctl addif multibr0 contgre
      sudo ip link set contgre up
    5. On Bravo, we are now ready to run lxd init.  Answer most of the questions with the defaults, or with whatever suits your LXD cluster.  The important questions come when the screen turns pink: decline the creation of a new bridge and, when asked about using an existing bridge, supply multibr0 instead of the built-in lxdbr0.

[Screenshots: the lxd init bridge configuration prompts]

    6. Once Bravo is configured, if you made it available over the network, you can add it as a remote on Alpha so you can control Bravo from a single host:
      lxc remote add bravo $BRAVO_IP --password=password_you_chose
    7. Things are all set up, and the rest of these steps will just allow you to test and exercise multi-node LXD from the Alpha host:
      lxc launch images:alpine/3.5 test1
      lxc launch images:alpine/3.5 bravo:test2
      lxc list
      lxc list bravo:
      lxc stop bravo:test2
      lxc move bravo:test2 local:
      lxc start test2
      lxc list
      lxc list bravo:
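If everything is wired up correctly, containers on both hosts land on the same subnet and can reach one another by name.  Here is a quick sanity check – a sketch that assumes the contgre link and the test1 container from the steps above, plus a hypothetical test3 container, and that must be run on the live hosts:

```shell
# On Alpha: confirm the GRE interface exists and is plugged into the LXD bridge
ip link show contgre
brctl show lxdbr0

# Launch a fresh container on Bravo, then ping it by name from a container on Alpha;
# name resolution is provided by the dnsmasq instance LXD manages on Alpha
lxc launch images:alpine/3.5 bravo:test3
lxc exec test1 -- ping -c 3 test3
```

If the ping fails, the usual suspect is a firewall dropping GRE (IP protocol 47) between the two hosts.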

Hopefully, this guide has given you some insight into how to make LXD a bit more usable in larger lab environments.  For more information about LXD, I invite you to take a look at my Linux Academy course LXC/LXD Deep Dive, where we explore how to run LXD in your environment and examine many potential use cases.
