Migrating from Classic LXC to Incus

A comprehensive guide for system administrators, self‑hosters, and advanced Linux users who want to migrate from classic LXC to Incus. The process ensures reliable container management, persistent networking, and smooth integration with storage and firewall systems.

Incus is the modern evolution of LXC, offering an advanced management API, stronger security isolation, and extended functionality for containers and virtual machines. Migrating to Incus allows users to retain their existing workloads while gaining long‑term support and active development.


🧩 1. Prerequisites

Before starting the migration, ensure that:

  • The host runs Debian 13 (Trixie) or newer.
  • The LXC service is active (systemctl status lxc.service).
  • Incus is installed and running (systemctl status incus.service).
  • A working network bridge exists (e.g. my-bridge).
  • A storage pool (e.g. raid1pool) is configured.
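
These prerequisites can be checked in one pass. The snippet below is a minimal sketch using this guide's example names (my-bridge, raid1pool); substitute your own:

```shell
# Pre-flight check: the names my-bridge and raid1pool are the examples
# used in this guide; replace them with your own.
for svc in lxc.service incus.service; do
  systemctl is-active --quiet "$svc" && echo "OK: $svc" || echo "NOT ACTIVE: $svc"
done
ip link show my-bridge >/dev/null 2>&1 && echo "OK: bridge my-bridge" || echo "MISSING: my-bridge"
incus storage show raid1pool >/dev/null 2>&1 && echo "OK: pool raid1pool" || echo "MISSING: raid1pool"
```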

🔍 2. Check the Source Container

List existing LXC containers:

lxc-ls -f

Stop the container to migrate (example: jitsi):

lxc-stop -n jitsi || true

⚙️ 3. Prepare the Incus Environment

Switch or create the target project:

incus project switch raid1 || { incus project create raid1 && incus project switch raid1; }

Verify pools and networks:

incus storage list
incus network list

Ensure the project’s default profile points to the correct bridge and storage pool:

incus profile show default --project raid1

Expected structure:

devices:
  eth0:
    name: eth0
    network: my-bridge
    type: nic
  root:
    path: /
    pool: raid1pool
    type: disk
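
If either device is missing, it can be added to the profile with incus profile device add. A sketch using this guide's example names; adjust to your environment:

```shell
# Attach the NIC and root disk to the project's default profile (sketch).
incus profile device add default eth0 nic network=my-bridge name=eth0 --project raid1
incus profile device add default root disk pool=raid1pool path=/ --project raid1
```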

🚚 4. Run the Migration

Tip: During migration, warnings such as “File exists” or “unsupported key” may appear. These typically indicate duplicate network interfaces or obsolete keys in the legacy LXC configuration. Clean up the configuration and rerun the migration.
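
One way to spot such duplicates before migrating is to list the network keys in the legacy config (a hedged example; the path follows the classic LXC layout):

```shell
# Duplicate lxc.net.* stanzas are a common cause of "File exists" warnings.
grep -nE '^lxc\.net' /var/lib/lxc/jitsi/config
```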

Perform a dry‑run first:

lxc-to-incus --dry-run --containers jitsi

Then execute the real migration:

lxc-to-incus --storage raid1pool --containers jitsi

This imports the container into Incus under:

/var/lib/incus/storage-pools/raid1pool/containers/jitsi/

✅ 5. Verify Migration Results

List containers across all projects:

incus list --all-projects

If the container appears in default, move it:

incus move jitsi --project default --target-project raid1

🌐 6. Fix Networking

Goal: Ensure the migrated container connects correctly to the my-bridge network and receives a valid IP via DHCP.

Stop and clean network devices:

incus stop jitsi
incus config device remove jitsi eth0 2>/dev/null || true
incus config device remove jitsi net0 2>/dev/null || true

Add the correct NIC:

incus config device add jitsi eth0 nic network=my-bridge name=eth0

Clear old volatile keys:

for k in volatile.eth0.hwaddr volatile.net0.name volatile.last_state.idmap volatile.last_state.power volatile.last_state.ready; do
  incus config unset jitsi "$k" 2>/dev/null || true
done

Start and verify:

incus start jitsi
incus exec jitsi -- ip -br a

If no IP is shown:

incus exec jitsi -- dhclient -v eth0
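
If the container keeps losing its address after restarts, make DHCP persistent inside the guest. The snippet below assumes the container uses systemd-networkd; guests using ifupdown or netplan need the equivalent change in their own config:

```shell
# Persistent DHCP on eth0 via systemd-networkd (assumption: guest runs systemd).
incus exec jitsi -- sh -c 'cat > /etc/systemd/network/10-eth0.network <<EOF
[Match]
Name=eth0

[Network]
DHCP=ipv4
EOF
systemctl enable --now systemd-networkd'
```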

📡 7. Check DNSMASQ Lease

Incus assigns persistent leases via its internal dnsmasq.

cat /var/lib/incus/networks/my-bridge/dnsmasq.hosts/raid1_jitsi.eth0

Example:

10:66:6a:e4:68:ae,jitsi

This confirms the container’s IP will remain constant across reboots.
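
Alternatively, the address can be pinned on the Incus side: on a managed bridge network, the nic device accepts an ipv4.address key. The address below is the example used later in the FireHOL rules:

```shell
# Pin a static DHCP lease for the container's NIC (example address).
incus stop jitsi
incus config device set jitsi eth0 ipv4.address 10.1.223.176
incus start jitsi
```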


🔥 8. Update FireHOL Rules

These ports are used by Jitsi Meet:

  • UDP 10000 – Media streams
  • UDP 3478 – STUN/TURN
  • TCP 5349 – Secure TURN over TLS

Adapt as needed for other applications.

Add DNAT rules:

dnat4 to 10.1.223.176 inface "${wan}" proto udp dport 10000
dnat4 to 10.1.223.176 inface "${wan}" proto udp dport 3478
dnat4 to 10.1.223.176 inface "${wan}" proto tcp dport 5349
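
For services with many forwarded ports, the rules can be generated from a port list instead of written by hand — a small shell sketch:

```shell
# Generate dnat4 lines for one target from a proto/port list (sketch).
target=10.1.223.176
rules=""
for spec in udp/10000 udp/3478 tcp/5349; do
  proto=${spec%/*}
  port=${spec#*/}
  rules="${rules}dnat4 to ${target} inface \"\${wan}\" proto ${proto} dport ${port}
"
done
printf '%s' "$rules"
```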

Reload and verify:

firehol reload
firehol status | grep 10.1.223.176

🧹 9. Clean Up Legacy LXC

After confirming success:

rm -rf /var/lib/lxc/jitsi/
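
If you would rather keep a fallback, archive the legacy rootfs first and delete only after the archive verifies (the backup path is an example):

```shell
# Archive the legacy container, verify the archive, then remove the original.
tar -czf /root/jitsi-lxc-backup.tar.gz -C /var/lib/lxc jitsi
tar -tzf /root/jitsi-lxc-backup.tar.gz >/dev/null && rm -rf /var/lib/lxc/jitsi/
```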

Then verify only Incus manages it:

incus list

🌉 10. Router Configuration

For FireHOL routing between bridge and WAN:

router4 mybridge inface my-bridge outface "${wan}"
    masquerade
    server "http" accept
    client all accept
    route all accept

This enables Incus containers to reach the internet and accept incoming HTTP connections.


💾 11. Btrfs Maintenance Tasks

If your Incus storage pool uses Btrfs, perform regular maintenance to ensure data integrity and performance:

🧱 Subvolume Management

List subvolumes for verification:

btrfs subvolume list /var/lib/incus/storage-pools/raid1pool/containers

🧼 Balancing and Cleanup

Run a filtered balance every few weeks to consolidate partially used chunks:

btrfs balance start -dusage=75 -musage=75 /var/lib/incus/storage-pools/raid1pool

🧩 Scrubbing

Verify data integrity monthly:

btrfs scrub start -Bd /var/lib/incus/storage-pools/raid1pool

Monitor progress:

btrfs scrub status /var/lib/incus/storage-pools/raid1pool
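
To run the scrub unattended, a cron entry can schedule it monthly (an example schedule; pick a quiet maintenance window):

```shell
# /etc/cron.d/btrfs-scrub-raid1pool — run on the 1st of each month at 03:00
0 3 1 * * root /usr/bin/btrfs scrub start -Bd /var/lib/incus/storage-pools/raid1pool
```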

🪣 Quotas and Space Usage

Enable and check quotas:

btrfs quota enable /var/lib/incus/storage-pools/raid1pool
btrfs qgroup show -rF /var/lib/incus/storage-pools/raid1pool

Regular Btrfs maintenance keeps the Incus container storage efficient and resilient.


🎯 Final Verification

After completing all steps, test network, storage, and application functionality. Confirm:

  • Persistent IP and hostname
  • Working DNS and routing
  • External access via configured DNAT
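
These checks can be scripted as a final smoke test. A sketch; the external UDP probe must be run from outside the host, and UDP port checks are best-effort:

```shell
incus exec jitsi -- ip -br addr show eth0        # persistent IP present?
incus exec jitsi -- getent hosts deb.debian.org  # DNS resolution works?
incus exec jitsi -- ping -c1 -W2 1.1.1.1         # outbound routing works?
# From an external machine (replace the placeholder with your public IP):
# nc -uzv -w2 <your-public-ip> 10000
```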

Migrating to Incus improves container lifecycle management, network isolation, and resource control compared to legacy LXC.

Benefits:

  • Unified REST API and CLI tooling
  • Project and profile isolation
  • Modern storage and network backends
  • Active community and long‑term maintenance
