Setting up systemd-nspawn VMs » History » Version 21

Peter Amstutz, 05/11/2025 02:27 AM

h1. Setting up systemd-nspawn VMs
This page describes how to use systemd-nspawn to create VMs for development and testing. This page is a guide, *not* step-by-step instructions. *If you just copy+paste commands without actually reading the instructions, you will BREAK YOUR OWN NETWORKING and I will not be held responsible.*
{{toc}}
h2. One-time supervisor host setup
h3. Install systemd-nspawn and image build tools

<pre>sudo apt install systemd-container debootstrap
</pre>

@systemd-container@ packages systemd-nspawn and friends. @debootstrap@ is used to build VMs.
"Install Ansible":https://dev.arvados.org/projects/arvados/wiki/Hacking_prerequisites#Install-Ansible the same way we do for development. I'm fobbing you off to that page so you know what version of Ansible we're standardized on.
h3. Enable systemd network services
Unsurprisingly, systemd-nspawn integrates well with other systemd components. The easiest way to get your VMs networked is to install systemd's network services:

<pre>sudo systemctl enable --now systemd-networkd systemd-resolved
</pre>

Note that systemd-networkd only manages interfaces it has explicit configuration for. On Debian the default configuration should play nice with NetworkManager, and systemd-resolved and NetworkManager also cooperate.
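Once both services are running, you can sanity-check them. @networkctl@ and @resolvectl@ ship with systemd, and these commands only read state:

<pre><code class="sh"># List interfaces and whether systemd-networkd manages them.
networkctl list

# Show the DNS servers systemd-resolved is currently using.
resolvectl status
</code></pre>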
If you refuse to do this, refer to the "Networking Options of systemd-nspawn":https://www.freedesktop.org/software/systemd/man/latest/systemd-nspawn.html#Networking%20Options to evaluate alternatives.
h3. NAT and firewall
systemd-networkd runs a DHCP server that provides private addresses to the virtual machines. You will need to configure your firewall to allow these DHCP requests, and to NAT traffic from those interfaces. These steps are specific to the host firewall; if yours isn't documented below, feel free to add it.
h4. ufw
For NAT, make sure these lines in @/etc/ufw/sysctl.conf@ are all set to @1@:

<pre>net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
</pre>

If you changed any, restart ufw. Then these are the rules you need:

<pre><code class="sh">for iface in vb-+ ve-+ vz-+; do
  sudo ufw rule  allow in on "$iface" proto udp to 0.0.0.0/0 port 67,68 comment "systemd-nspawn DHCP"
  sudo ufw route allow in on "$iface"
done
</code></pre>

h3. Filesystem
systemd-nspawn stores both images and containers under @/var/lib/machines@. It works with any filesystem, but if the filesystem is btrfs, it can optimize various operations with snapshots, etc. "Here's a blog post outlining some of the gains":https://idle.nprescott.com/2022/systemd-nspawn-and-btrfs.html.
I recommend that any deployment, and especially production deployments, have a btrfs filesystem at @/var/lib/machines@. Since this directory is likely to grow large, a dedicated partition is a good idea too.
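As a sketch of the partition setup, assuming you have a spare partition to dedicate — @/dev/sdX1@ below is a placeholder you must replace with your actual device:

<pre><code class="sh"># Format the spare partition as btrfs and mount it where
# systemd-nspawn expects images and containers to live.
sudo mkfs.btrfs /dev/sdX1
sudo mkdir -p /var/lib/machines
sudo mount /dev/sdX1 /var/lib/machines

# Add an /etc/fstab entry so the mount persists across reboots.
echo '/dev/sdX1 /var/lib/machines btrfs defaults 0 0' | sudo tee -a /etc/fstab
</code></pre>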
h3. Resolving VM names
You can configure your host system to resolve the names of running VMs so you can easily SSH into them, open them in your browser, write them in Ansible inventories, etc. Edit @/etc/nsswitch.conf@, find the @hosts@ line, and make sure that @mymachines@ appears before any @dns@ or @resolve@ entries. See "nss-mymachines(8)":https://www.freedesktop.org/software/systemd/man/latest/nss-mymachines.html.
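For example, the @hosts@ line might end up looking like this (the other entries vary by distribution; the key point is that @mymachines@ comes before @resolve@ and @dns@):

<pre>hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns
</pre>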
h3. Alternative configuration: virtual bridge with your local network
You can create a "virtual bridge" that acts as an Ethernet switch for your containers and virtual machines. Your containers get virtual Ethernet devices with their own MAC addresses (generated by the Linux kernel), which lets them request their own IP addresses from your home/office router (the router doesn't need any configuration). The nice thing about this is that it avoids the painful complexity of IP masquerading and NAT, and makes it much easier for other devices on your local network to access services running in the container. The drawbacks are that your container is more exposed (since that's the point) and you have less control over how IP addresses are assigned than with a completely dedicated private network.
Create the following file called @br0.xml@:

<pre>
<network>
  <name>br0</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
</pre>

Then use @virsh@ (part of libvirt) to define and start the bridge network:

<pre>
virsh net-define br0.xml --validate
virsh net-start br0
systemctl restart libvirtd.service
</pre>

For DNS, I recommend using @mDNS@ (generally implemented by the @avahi@ daemon) and having it publish the hostname on the local network. Edit @/etc/avahi/avahi-daemon.conf@:
83
84
<pre>
85
[publish]
86
publish-workstation=yes
87
</pre>
88
89
Restart @avahi-daemon@ after changing the configuration, and then all workstations with @mDNS@ clients will see the container or VM as @HOSTNAME.local@.
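You can verify mDNS resolution from another machine on the network. @MACHINE@ below is a placeholder hostname, and @avahi-resolve-host-name@ comes from the @avahi-utils@ package:

<pre><code class="sh"># Resolve the advertised name over mDNS.
avahi-resolve-host-name MACHINE.local

# Or simply test reachability.
ping -c 1 MACHINE.local
</code></pre>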
h2. Build a systemd-nspawn container image
The Arvados source includes an Ansible playbook to create an image from scratch with @debootstrap@. Write this variables file as @nspawn-image.yml@ and edit the values as you like:

<pre><code class="yaml">
### Stuff you probably want to customize ###
# The name of the user account to create in the VM.  The default value is "admin".
#image_username: "admin"

# A hash of the user's password. The default is no password.
# You need to set this or you won't be able to use 'sudo'.
# You can generate a hash with:
#
# ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"
#
# See also <https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module>
image_passhash: "!"

# SSH public key string or URL which will be provisioned as an authorized key for
# the user account above.  You probably want this.
#image_authorized_keys: "FIXME"

### Stuff you may want to customize ###
# The codename of the release to install.
debootstrap_suite: bookworm

# The name of the image that will show up in "machinectl list-images" as well as
# "machinectl start" and "machinectl stop".
# The default name is the distribution version being set up (e.g. debian-bookworm),
# but you can also call this whatever you want, like "my-arvados-test".
image_name: "debian-{{ debootstrap_suite }}"

# The mirror to install the release from.
# The commented-out setting below is appropriate for Ubuntu.
debootstrap_mirror: "http://deb.debian.org/debian"
#debootstrap_mirror: "http://archive.ubuntu.com/ubuntu"

### Additional user account customization ###
# Other settings for the created user.
#image_gecos: ""
#image_shell: /usr/bin/bash
</code></pre>

With your Ansible virtualenv activated, run:

<pre><code class="sh">ansible-playbook --ask-become-pass --extra-vars @nspawn-image.yml arvados/tools/ansible/build-debian-nspawn-vm.yml
</code></pre>

If this succeeds, you have @/var/lib/machines/MACHINE@ with a base install and configuration.
h3. Consider Cloning
This is probably a good time to mention that you should think of these machine subdirectories more like VM disks than like Docker images. If you simply boot your new VM and start making changes to it, those changes are permanent. If you want an ephemeral VM, you need to ask for that explicitly. Personally, I prefer never to boot this bootstrapped VM directly; instead I run @machinectl clone BASE_NAME MACHINE@, then treat @BASE_NAME@ like an "image" that I never touch, and @MACHINE@ more like a traditional stateful VM.
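As a sketch of that workflow, with @debian-bookworm@ as the base image and @arvados-test@ as an example clone name:

<pre><code class="sh"># Clone the pristine base image and boot the copy.
sudo machinectl clone debian-bookworm arvados-test
sudo machinectl start arvados-test

# When you want a fresh slate, discard the clone and re-clone the base.
sudo machinectl stop arvados-test
sudo machinectl remove arvados-test
</code></pre>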
h2. Configure the VM
VMs are configured using the file at @/etc/systemd/nspawn/MACHINE.nspawn@. The defaults are pretty good and you don't have to write much. The main thing you'll want to do is tell it how to resolve DNS, and consider other networking:
<pre><code class="ini">[Exec]
# If you're using a localhost DNS resolver like tailscale or dnsmasq,
# resolv.conf will be a stub file, in which case you need the "real"
# ("uplink") resolv.conf.
ResolvConf=bind-uplink

[Network]
# If you want multiple VMs to be able to talk to each other,
# put them all in the same zone:
#Zone=YOURZONE

# If you set up a virtual bridge:
#Bridge=br0

[Files]
# If you want to make things on the host available in the VM,
# do that here:
Bind=/dev/fuse
#BindReadOnly=/home/YOU/SUBDIR
</code></pre>

Refer to "systemd.nspawn":https://www.freedesktop.org/software/systemd/man/latest/systemd.nspawn.html for all the options.
h2. Privilege a Container
If you want to run FUSE, Docker, or Singularity inside your VM, that requires additional privileges. We have an Ansible playbook to automate that too. To grant privileges for all these services, with your Ansible virtualenv activated, run (@-e@ is the short version of @--extra-vars@):

<pre><code class="sh">ansible-playbook -e container_name=MACHINE arvados/tools/ansible/privilege-nspawn-vm.yml
</code></pre>

You can exclude some privileges by setting @SERVICE_privileges=absent@. For example, if you don't intend to run Singularity in this VM:

<pre><code class="sh">ansible-playbook -e "container_name=MACHINE singularity_privileges=absent" arvados/tools/ansible/privilege-nspawn-vm.yml
</code></pre>

See the comments at the top of source:tools/ansible/privilege-nspawn-vm.yml for details.
h2. Interacting with VMs
"machinectl":https://www.freedesktop.org/software/systemd/man/latest/machinectl.html is the primary command to interact with both containers and the underlying disk images:
<pre><code class="sh">machinectl start MACHINE
machinectl stop MACHINE
machinectl shell YOU@MACHINE

machinectl clone MACHINE1 MACHINE2
machinectl remove MACHINE [MACHINE2 ...]
</code></pre>

Refer to the man page for full details. Note that each running container runs under the <code>systemd-nspawn@MACHINE</code> systemd service, and you can interact with that service using all the usual tools. (Try <code>journalctl -u systemd-nspawn@MACHINE</code>.)