Upgrade from 1.5 to latest failed with Whooops: [('/opt/vyatta/etc/config... [Errno 6] No such device

Aloha,
today I tried to update all my VyOS routers.
1 out of 5 failed.
Unfortunately, I don't understand why.

I get the following errors:

What would you like to name this image? (Default: 1.5-rolling-202402031033)
Would you like to set the new image as the default one for boot? [Y/n] Y
An active configuration was found. Would you like to copy it to the new image? [Y/n] Y
Copying configuration directory
Cleaning up
Unmounting target filesystems
Removing temporary files
Whooops: [('/opt/vyatta/etc/config/containers/storage/vfs-containers/07b695870e72eb6298c2781eaf8de10f49c08e05ccf6f92286181f95c0018fcd/userdata/attach', '/usr/lib/live/mount/persistence/boot/1.5-rolling-202402031033/rw/opt/vyatta/etc/config/containers/storage/vfs-containers/07b695870e72eb6298c2781eaf8de10f49c08e05ccf6f92286181f95c0018fcd/userdata/attach', "[Errno 6] No such device or address:

root@HUB01:/# du -hd1
1.5G ./boot
614M ./config
84K ./dev
4.6M ./etc
112K ./home
0 ./media
4.0K ./mnt
610M ./opt
0 ./proc
18K ./root
1.8M ./run
0 ./srv
du: WARNING: Circular directory structure.
This almost certainly means that you have a corrupted file system.
NOTIFY YOUR SYSTEM MANAGER.
The following directory is part of the cycle:
./sys/kernel/debug/pinctrl
0 ./sys
8.0K ./tmp
9.3G ./usr
418M ./var
13G .

root@HUB01:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 394M 1.8M 393M 1% /run
/dev/sda1 40G 6.7G 31G 18% /usr/lib/live/mount/persistence
/dev/loop0 370M 370M 0 100% /usr/lib/live/mount/rootfs/1.5-rolling-202312010026.squashfs
tmpfs 2.0G 0 2.0G 0% /usr/lib/live/mount/overlay
overlay 40G 6.7G 31G 18% /
tmpfs 2.0G 84K 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 8.0K 2.0G 1% /tmp
tmpfs 2.0G 248K 2.0G 1% /var/tmp
none 2.0G 0 2.0G 0% /etc/cni/net.d
none 2.0G 1.2M 2.0G 1% /opt/vyatta/config
tmpfs 394M 0 394M 0% /run/user/1003
shm 64M 0 64M 0% /usr/lib/live/mount/persistence/container/storage/overlay-containers/c1a0b9982d5cda7814a0163ba545063dfe33cae671b6135c543befb85dc258de/userdata/shm
fuse-overlayfs 40G 6.7G 31G 18% /usr/lib/live/mount/persistence/container/storage/overlay/e0f956f078b0a6e3f2eb13ddf46f5096290f057fd8b03aa676c8288db0b33c66/merged

root@HUB01:/# sudo find /opt/vyatta/ -size +10M
/opt/vyatta/etc/config/containers/storage/vfs/dir/4b83135d56c35d16ae426d2e153303516ed14cdd9d45cc8804ca0a2bd172dd00/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/cbec87dbf717d357bf5a3c1fef54c83a4de76228b40824f0b61d149fd3da7273/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/e05d7d54ecf19671fb6dcdb44889e89f46ca483c26152f97d4621ac7bb5c9bc3/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/1374bf91cda035593ebc8d55c437228683437ca553eaad9f7c9b6f397a711d9b/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/e82c086397b406f24f0b9e3e2c6d561112bfe4314a337859483d0ee84f3b20e9/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/309139a0cc37c05b8b503ec1382bec860d6071c992e898e077b11e8f0c0eb62b/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/aa0eaf7235690f0de68442e114f7bc8e00833ea9169ce91492e87eb843626705/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/6c137cbb02c58db7f900689816bbc8645b5ffafac5570a9812089aab8c2cc89b/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/71d0e5cd27737d0ed1dc7341336f31c9cb0223ef36c904ffcd0f6181cc3c8cfc/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/53604296d7bb16742877b9be047f448648e76f4b74cc63d088183bca678b8b1d/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/4961da30458746e64fcb833f9a288ec65ea8cd04f65f11331c14562c18aabbc1/opt/adguardhome/AdGuardHome
/opt/vyatta/etc/config/containers/storage/vfs/dir/e6d0d3984b66b47830e29990072e19263c6eef0d00006ee80ab5edc049fa3292/opt/adguardhome/AdGuardHome

Is there any possibility to fix that?

Thanks a lot for any suggestions.

Cheers
Marcel

Looks like it's related to the additional containers that are installed.

What does your config look like?

show config commands | strip-private

Also, how many concurrent installs do you have?

show install

As the config is quite long (BGP peers, prefix-lists, etc.),
I'll omit the full config for now.

I have a container running; here's the config for it:

set container name adguard allow-host-networks
set container name adguard cap-add 'net-bind-service'
set container name adguard image 'docker.io/adguard/adguardhome:latest'
set container name adguard port admin-80 destination '80'
set container name adguard port admin-80 listen-address 'xxx.xxx.222.222'
set container name adguard port admin-80 protocol 'tcp'
set container name adguard port admin-80 source '80'
set container name adguard port admin-443 destination '443'
set container name adguard port admin-443 listen-address 'xxx.xxx.222.222'
set container name adguard port admin-443 protocol 'tcp'
set container name adguard port admin-443 source '443'
set container name adguard port udp-dns destination '53'
set container name adguard port udp-dns listen-address 'xxx.xxx.222.222'
set container name adguard port udp-dns protocol 'udp'
set container name adguard port udp-dns source '53'
set container name adguard restart 'on-failure'
set container name adguard volume conf destination '/opt/adguardhome/conf'
set container name adguard volume conf source '/config/adguard/conf'
set container name adguard volume work destination '/opt/adguardhome/work'
set container name adguard volume work source '/config/adguard/work'

The show install command doesn't exist:

@router1:~$ show i
interfaces   ip           ipoe-server  ipv6         isis

Thanks
Marcel

Sorry, instead of “show install” I was thinking of:

show system image

Regarding containers: would it be possible (as a troubleshooting step) to back up all your containers and then remove them one by one, to see whether it's a particular container, or any container at all, that causes this error during the upgrade?

That is: first back up everything.

Then remove one container and attempt the upgrade.

If it still fails, remove another one and attempt the upgrade again.

Loop until no containers remain (in case the error doesn't go away before all containers are removed).
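On the CLI, the loop above could look roughly like this (a sketch; the backup destination is hypothetical, and <name> stands for whatever container names your config uses):

```shell
# Back everything up first (hypothetical destination -- adjust to your setup).
sudo tar czf /config/user-data/containers-backup.tar.gz -C / config/containers

# Then, per iteration, in configure mode:
#   configure
#   delete container name <name>
#   commit
#   save
#   exit
# ...and retry the upgrade with "add system image" after each removal.
```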

Hi,
I was also thinking of deleting the container to see what happens.
Luckily it's only one container, and I have the same config for it on all my VyOS routers, as it is the AdGuard DNS filter.

Will try that out!

I don't have any other images; I deleted them already.

sh system image
Name                      Default boot    Running
------------------------  --------------  ---------
1.5-rolling-202312010026  Yes             Yes

Good ideas!
Will try it out and report back!

Thanks a lot
Cheers
Marcel

Aloha,
so my container is uninstalled, but that didn't help. :slight_smile:
But maybe someone knows what this directory contains:

/config/containers# ls -la storage/vfs/dir/
total 88
drwxrwxr-x 22 root vyattacfg 4096 Dec 17  2022 .
drwxrwxr-x  3 root vyattacfg 4096 Feb 10  2022 ..
drwxrwxr-x 19 root vyattacfg 4096 Aug  4  2022 1374bf91cda035593ebc8d55c437228683437ca553eaad9f7c9b6f397a711d9b
drwxrwxr-x 19 root vyattacfg 4096 Dec 17  2022 309139a0cc37c05b8b503ec1382bec860d6071c992e898e077b11e8f0c0eb62b
drwxrwxr-x 19 root vyattacfg 4096 Dec 17  2022 33d875b9d77e349409356ac07bfd69386bdc011014acc3d51c6222923f50459c
drwxrwxr-x 19 root vyattacfg 4096 Feb 10  2022 4961da30458746e64fcb833f9a288ec65ea8cd04f65f11331c14562c18aabbc1
/config/containers# ls -la storage/
total 44
drwxrwxr-x 10 root vyattacfg 4096 Feb 10  2022 .
drwxrwxr-x  3 root vyattacfg 4096 Feb 10  2022 ..
drwxrwxr-x  2 root vyattacfg 4096 Feb 10  2022 cache
drwxrwxr-x  2 root vyattacfg 4096 Feb 10  2022 libpod
drwxrwxr-x  2 root vyattacfg 4096 Feb 10  2022 mounts
-rwxrwxr-x  1 root vyattacfg   64 Dec 17  2022 storage.lock
drwxrwxr-x  2 root vyattacfg 4096 Feb 10  2022 tmp
-rwxrwxr-x  1 root vyattacfg    0 Feb 10  2022 userns.lock
drwxrwxr-x  3 root vyattacfg 4096 Feb 10  2022 vfs
drwxrwxr-x  3 root vyattacfg 4096 Dec 17  2022 vfs-containers
drwxrwxr-x  6 root vyattacfg 4096 Dec 17  2022 vfs-images
drwxrwxr-x  2 root vyattacfg 4096 Dec 17  2022 vfs-layers

Do we need this, or can I happily delete all of it? :smiley:
I don't have any containers running anymore.
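Before wiping it, it may be worth double-checking that nothing still references that storage. VyOS runs containers via podman, so (as a sketch):

```shell
# List any containers and images podman still knows about.
sudo podman ps -a
sudo podman images
```

For context: vfs/, vfs-containers/, vfs-images/ and vfs-layers/ are the layout of podman's "vfs" storage driver, which keeps a full copy of every image layer rather than using copy-on-write; that is why the find output above shows a dozen copies of the AdGuardHome binary.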

So,
I just removed the container config
and deleted everything from:

/config/containers/storage

Then upgraded, rebooted, and added the container config and image again.
Now it works!
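For anyone hitting the same Whooops, the recovery boils down to roughly this (a sketch; make sure you have a backup and no containers left in the config before wiping the storage directory):

```shell
# 1. Remove the container from the configuration (configure mode):
#      delete container name adguard
#      commit
#      save

# 2. Wipe the stale container storage. This is what the upgrade tripped
#    over: the old vfs storage contains special files (e.g. the
#    userdata/attach socket) that the config copy apparently cannot handle.
sudo rm -rf /config/containers/storage

# 3. Run "add system image", reboot, then re-add the container config
#    and pull the image again.
```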

Thanks a lot for help!
