incus start listmonk
Error: Failed start validation for device "net0": MAC address "00:16:3e:7f:0d:73" already defined on another NIC

incus config device unset listmonk net0 hwaddr
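
After unsetting the explicit MAC, Incus falls back to a generated volatile address on the next start. A rough way to verify this (sketch, not from the original session):

incus start listmonk
incus config show listmonk --expanded | grep hwaddr
# → the generated MAC shows up under volatile.net0.hwaddr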

incus shell web-test     
Error while executing alias expansion: incus exec web-test -- su -l 
Error: Instance not found

incus project list
incus project switch default
incus shell web-test
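
As an alternative to switching the default project, the project can also be targeted per command; <project-name> below is a placeholder:

incus --project <project-name> exec web-test -- su -l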

Incus: Debugging peertube4 Migration from bubu to at1 (project raid1)

Goal

Migrate the Incus container peertube4 from server bubu (local) to server at1, into project raid1, using incus copy --mode push.


Initial Situation

  • Source host: bubu
    • Incus 6.0.4
    • Container: peertube4 (converted from LXC via lxc-to-incus)
    • Storage: dir on /dev/md2 (≈ 94% used)
  • Target host: at1
    • Incus 6.0.4, clustered node 16C32T
    • Project: raid1 already exists and is the current project
    • Storage: dir + btrfs

Remote configuration on bubu:

incus remote list
# at1  → https://80.109.18.113:8443 (clustered Incus)
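
For reference, such a remote is typically registered on the source along these lines (trust/certificate setup omitted):

incus remote add at1 https://80.109.18.113:8443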

peertube4 on source:

incus list
incus info peertube4

Result: peertube4 is RUNNING, privileged, with root disk on pool default.


First Attempt: Scripted Migration

The migration was triggered via:

scripts/incus_remote_copy.sh peertube4

Relevant script log:

[DEBUG] Stopping LXC container peertube4
[DEBUG] Cleaning unsupported keys in /var/lib/lxc/peertube4/config
[DEBUG] Running lxc-to-incus for peertube4
Parsing LXC configuration
Checking for unsupported LXC configuration keys
Checking for existing containers
Skipping container 'peertube4': Container already exists
[DEBUG] Copying Incus container peertube4 to at1 project raid1
Error: Error transferring instance data: Failed migration on target: Failed getting migration target filesystem connection: context deadline exceeded
[INFO] Migration of peertube4 completed

Symptoms:

  • CLI prints Error: Error transferring instance data ... context deadline exceeded.
  • Immediately afterwards, the script also prints [INFO] Migration of peertube4 completed, which is misleading because the copy was not successful (a guard like the sketch below would avoid this).
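
A minimal sketch of how the script could avoid the misleading success message, assuming a shell variable CT holding the container name (hypothetical excerpt, not the actual script):

if incus copy --mode push "$CT" at1:"$CT" --target-project raid1; then
    echo "[INFO] Migration of $CT completed"
else
    echo "[ERROR] Migration of $CT failed" >&2
    exit 1
fi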

Confusion Between Remote and Project

Reason:

  • raid1 is an Incus project, not a remote.
    • For local at1 (when logged in on at1):

      incus --project raid1 list
      incus --project raid1 info peertube4
      
    • From another host using remote at1:

      incus --project raid1 list at1:
      incus --project raid1 info at1:peertube4
      

At this point, there was no peertube4 in project raid1 on at1.
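
To rule out a missing project on the target, it can also be confirmed from the source that raid1 exists there, for example:

incus project list at1: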


Zombie Migration Operations on Source

On the source (bubu), checking operations showed:

incus operation list

Output (example):

| 5c231c05-46f1-4244-81a4-052867187b13 | TASK | Migrating instance | RUNNING | NO | 2025/12/05 22:51 CET |
| 5e43218b-a2be-449a-a3e5-e65effe8339f | TASK | Migrating instance | RUNNING | NO | 2025/12/06 06:18 CET |

Details:

incus operation show 5c231c05-46f1-4244-81a4-052867187b13

Snippet:

status: Running
metadata:
  fs_progress: 'peertube4: 92.68GB (11.82MB/s)'
err: ""

The instance could not be started:

incus start peertube4
# Error: Failed to create instance start operation: Instance is busy running a "migrate" operation

Root cause here:

  • Migration operation never properly finished/cleaned up.
  • Incus still considered peertube4 as being in a running migrate operation.
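
A less invasive first step is to try cancelling the stuck operations individually; if they refuse to go away, restarting the daemon (below) is the fallback:

incus operation delete 5c231c05-46f1-4244-81a4-052867187b13
incus operation delete 5e43218b-a2be-449a-a3e5-e65effe8339f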

Fix: Restart Incus Daemon on Source

On bubu:

systemctl restart incus

incus operation list
# → no more "Migrating instance" entries

incus start peertube4
incus console peertube4 --show-log
# → container boots normally

Result: peertube4 is back to normal RUNNING state, no zombie operations.


Second Attempt: Manual incus copy --debug

Command on source (bubu):

incus copy --mode push peertube4 at1:peertube4 \
  --target-project raid1 \
  --debug

Key debug steps:

  1. Connect to local Incus via Unix socket.
  2. Connect to remote Incus at https://80.109.18.113:8443.
  3. Read source instance config and devices.
  4. On at1, create a target instance in project raid1 with source type migration.
  5. Start migration from source ("migration": true, "live": true).

Final error in debug output:

Error: Failed instance migration: Unable to perform live container migration. CRIU isn't installed on the source server. To migrate the container, stop the container before migration or install CRIU on the source server

This is the true root cause of the migration failure.

  • The instance peertube4 was RUNNING.
  • Incus tried a live migration ("live": true).
  • Live container migration requires CRIU on the source.
  • CRIU is not installed on bubu → live migration fails.
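
A quick way to confirm whether CRIU is present on the source (sketch; Debian/Ubuntu-style host assumed):

command -v criu && criu --version || echo "criu not installed on this host"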

Clean Fix: Offline (Cold) Migration Without CRIU

Given PeerTube is large and some downtime is acceptable, the simplest and most robust solution is to do an offline migration:

  1. Stop the container on the source:

    incus stop peertube4
    
  2. Run the migration again (cold copy):

    incus copy --mode push peertube4 at1:peertube4 \
      --target-project raid1 \
      --debug
    
    • Because peertube4 is stopped, Incus does a cold migration.
    • CRIU is no longer needed.
  3. Verify container on at1:

    On at1:

    incus --project raid1 list
    incus --project raid1 info peertube4
    incus --project raid1 start peertube4
    incus --project raid1 console peertube4 --show-log
    
  4. Application-level check:
    • Ensure Nginx / reverse proxy and DNS are updated to point to the new PeerTube.
    • Test via browser that PeerTube works as expected (a quick command-line check is sketched below).
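
A quick command-line check can complement the browser test; the URL below is a placeholder for the actual PeerTube domain:

curl -I https://peertube.example.org/
# → expect HTTP 200 (or a redirect to the landing page)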

Only after verifying that the at1 instance is fully OK should the old instance on bubu be stopped and removed:

incus stop peertube4
incus delete peertube4

Alternative Fix: Enable Live Migration by Installing CRIU

If live container migration is desired as a general feature:

  1. Install CRIU on the source (bubu):

    apt update
    apt install criu
    
  2. Keep the container running and rerun the copy:

    incus copy --mode push peertube4 at1:peertube4 \
      --target-project raid1 \
      --debug
    

Notes:

  • Live migration introduces extra complexity (kernel+CRIU compatibility, large container state, etc.).
  • For big workloads like PeerTube, offline migration is usually simpler and less error-prone.

Quick Debug Checklist for Future Migrations

When an Incus migration fails:

  1. Check for zombie operations:
    • incus operation list
    • If stuck Migrating instance ops exist, try incus operation delete <id>.
    • If that doesn't work, restart the daemon: systemctl restart incus.
  2. Verify source container state:
    • incus info <name>
    • incus start <name>
    • incus console <name> --show-log
  3. Double-check remote vs project syntax:
    • incus remote list
    • incus project list
    • Use --project <name> and remote prefix correctly.
  4. Run the copy with --debug:
    • incus copy --mode push <ct> at1:<ct> --target-project raid1 --debug
    • Read the last lines carefully for the real error.
  5. If error mentions CRIU / live migration:
    • Either install CRIU on the source: apt install criu
    • Or stop the instance first: incus stop <ct> and rerun copy (cold migration).
  6. Verify the new instance on the target:
    • incus --project raid1 list
    • incus --project raid1 start <ct>
    • incus --project raid1 console <ct> --show-log
  7. Only then decommission the old instance.

Summary

  • The migration of peertube4 initially failed with a generic context deadline exceeded, but the real underlying cause was attempted live migration without CRIU on the source.
  • This also left zombie migration operations, which blocked starting the instance until the Incus daemon was restarted.
  • The robust, low-maintenance solution is to:
    • Stop the container before migration (cold copy).
    • Use --debug to see clear errors.
    • Keep the script from starting new migrations while old Migrating instance operations are still present (a minimal pre-flight check is sketched below).
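
A minimal pre-flight check for the script, assuming it runs on the source host (hypothetical excerpt):

# Refuse to start a new copy while a migration operation is still listed.
if incus operation list | grep -q "Migrating instance"; then
    echo "[ERROR] A 'Migrating instance' operation is still running; clean it up first." >&2
    exit 1
fi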