
[BUG] Rebuild of Dashy in ProxMox cluster v 8.0.4 / LinuxTurnkey / docker with ZFS storage is crashing. #1337

Closed
LuxBibi opened this issue Oct 1, 2023 · 34 comments
Labels
🐛 Bug [ISSUE] Ticket describing something that isn't working

Comments

@LuxBibi

LuxBibi commented Oct 1, 2023

Environment

Self-Hosted (Docker)

System

ProxMox Ver 8.0.4 / 3 node Cluster - LXC - Debian GNU/Linux 11 (bullseye) - docker Version: 24.0.5

Version

Dashy version 2.1.1

Describe the problem

When I start a rebuild of the Dashy dashboard via the icon/menu (after changing some *.yml files):
-> Config -> Update Configuration -> Rebuild Application
the rebuild process "crashes" and Dashy is no longer usable at all.

Dashy works (using and rebuilding) like a charm on a Synology NAS, and on ProxMox with the container on local-lvm storage, with the exact same image version. Problems only appear when I have the LXC (the Linux container where Docker is running) on a cluster-wide ZFS storage (shared between all the ProxMox nodes).

I can reproduce this error 100% of the time, on every recreate of a working Dashy followed by a rebuild!

As soon as I move the Docker container to local storage (local-lvm) on the ProxMox cluster, there is no problem at all.

Workaround, if someone encounters the same problem:
Move the LXC container disk to local storage (e.g. local-lvm), make all the changes you need, redeploy a new Dashy image and rebuild Dashy. Once Dashy is up and running again (Healthy status in Portainer), rebuild and validate that the actual setup in the *.yml file(s) matches your needs. Once this is OK, move the LXC container back to the cluster-wide ZFS storage to have ProxMox cluster redundancy for Dashy. (Not very user-friendly and quite time-consuming, but it works.) A command sketch follows below.
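
A minimal sketch of those disk moves from the Proxmox host shell, assuming CT 600 and the storage names local-lvm / cluZFS-1 used elsewhere in this thread (pct option names may vary slightly between Proxmox versions):

# move the LXC root disk off ZFS, work on local-lvm, then move it back
pct stop 600
pct move-volume 600 rootfs local-lvm --delete 1
pct start 600
# ... redeploy the Dashy image and rebuild while on local-lvm ...
pct stop 600
pct move-volume 600 rootfs cluZFS-1 --delete 1
pct start 600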

Additional info

dashyRebuild-BuildOperationFailed
_Dashy-it-4-Home_logs.txt



@LuxBibi
Author

LuxBibi commented Oct 1, 2023

Just to mention that Dashy is by far the best dashboard solution for my needs, mainly because of the multi-page and easy search functionality. Thanks Liss ;-)

@CrazyWolf13
Collaborator

Hi, can you share your Dashy logs?
And your Proxmox LXC container config?
It's located in /etc/pve/lxc/<CTID>.conf (on the Proxmox host).

@LuxBibi
Author

LuxBibi commented Oct 20, 2023

Hi,
Here is the 600.conf (the prio-2 Docker LXC, CT 600, which runs Dashy):

arch: amd64
cores: 2
features: keyctl=1,nesting=1
hostname: dockerPrio2Services
memory: 2048
net0: name=eth0,bridge=vmbr0,gw=192.168.178.1,hwaddr=3E:CB:DD:D1:4C:DB,ip=192.168.178.202/24,type=veth
ostype: debian
rootfs: cluZFS-1:subvol-600-disk-0,size=8G
swap: 512
tags: docker;ha
unprivileged: 1

I've attached the Dashy logs downloaded from Portainer. Let me know if you need another log file, as I am not able to shell into the Dashy container. (I looked on Google some months ago; this is normal behavior.)

Steps done for your log files:

  • Stopped the container in Portainer

  • Restarted the Dashy container

  • Checked that the log file looked "normal"

    See file: 2023-10-20_Dashy-it-4-Home_logs after RESTART.txt

  • Tested Dashy. Worked as expected

  • Rebuilt Dashy without changing any *.yml file. "Just" started a rebuild:

    1. "Config"
    2. "Update Configuration"
    3. "Rebuild Application"
    4. "Start Build"
  • Dashy errored while rebuilding. The Dashy dashboard is not displayed anymore (Chrome refresh)

(screenshots of the failed build attached)

  • Downloaded the log file of the Dashy container via Portainer

See file: 2023-10-20 [email protected] build NO CONFIG FILE CHANGE = BAD.txt

Also attached: the log file of the same container (restored in ProxMox) after moving it to local-lvm storage and doing a rebuild.
[email protected] build NO CONFIG FILE CHANGE = OK.txt

Let me know if I can help solve the issue by providing more information.

Thanks so far,
Luc

@CrazyWolf13
Collaborator

CrazyWolf13 commented Oct 20, 2023 via email

@LuxBibi
Author

LuxBibi commented Oct 21, 2023

Hi CrazyWolf13 ;-)
Hi all,

Took some time, as I wanted to run some tests by moving my ProxMox storage from ZFS to ceph ...

So before digging into the responses I was asked for: ceph does not give any problem while rebuilding the config in Dashy! I will switch back to ZFS and, by tomorrow at the latest, redo the same rebuild process to see if the same problem still applies. I'll post my results.

Here are the answers to the questions.
/home/home is because I am running Dashy with multiple pages ... I admit that the name of my first page does not help in understanding this URL ;-) .. But this is OK for me .. so for the other pages I have /home/it-4-home ...

Here is the conf.yml file ...

pageInfo:
  title: '| it-4-Home |'
  description: Access all the systems/services hosted @ | it-4-Home |
  footerText: ''
pages:
  - name: Home
    path: conf.yml
  - name: it-4-Home
    path: it-4-Home.yml
  - name: SmartHome
    path: SmartHome.yml
  - name: Networking
    path: Networking.yml
  - name: Monitoring
    path: Monitoring.yml
  - name: Music and Videos
    path: MusicAndVideos.yml
  - name: Documentation
    path: Documentation.yml
sections:

Just my point of view (I am not a developer at all ;-) ): I presume that this bug is more related to how Dashy accesses files on different filesystems, ZFS versus local-lvm versus ceph. This is because my other containers have behaved without any problem so far, while also running on any of these filesystems, but mainly on ZFS.

The only way to understand what is happening will be for someone on your side to debug the rebuild process step by step on ZFS, especially as this happens on every rebuild, be it with a minimalistic *.yml file or with my more complex multi-page *.yml setup.
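
For whoever picks this up: Dashy's rebuild is essentially a yarn build inside the container, so (assuming the container is named dashy) a sketch like the following should reproduce the failure outside the UI and surface the underlying filesystem error directly:

# trigger the same build the UI runs, from the Docker host
docker exec -it dashy yarn build
# then inspect what actually landed in the output directory
docker exec dashy ls -la /app/dist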

Developer console in the next post, as I have switched back to ZFS ..

Thanks so far
Luc

@LuxBibi
Author

LuxBibi commented Oct 21, 2023

Hi guys,

I switched back to ZFS .. and the problem reoccurs in exactly the same way as before .. So local-lvm and ceph are working ... Local-lvm is not an option, as I run a ProxMox cluster .. ceph consumes too many resources for this configuration .. So ZFS is the only possible option for me ..

Hope you'll find where this problem comes from ...

Hereafter the Chrome console while refreshing the page ..

(screenshot attached)

Thanks so far ..

Luc

@CrazyWolf13
Collaborator


Please look again through the guide I sent you and send me the output of the Browser Console.

@LuxBibi
Author

LuxBibi commented Oct 24, 2023

Hello,

I was not at home for some days, hence the delay.

Here is the requested information. Last time I sent the Network part, as the Console view showed only a few lines.

This time I enabled "All levels" in the console menu, including the previously unselected "Verbose" level, which finally showed more information. The screenshot and log files (preserved mode) are attached.

First Console file:

  • Navigated to my Dashy page.
  • Rebuilt Dashy, ending with "Build operation failed"

(screenshot attached)
dashy.local-1698136920861.log

Second Console file:

  • Started a fresh Google Chrome (Microsoft Edge behaves exactly the same)
  • Activated the console (checked that Verbose is still enabled)
  • Navigated to my Dashy

This time the Dashy page appeared, showing all the different steps as it was trying to build its environment: "Initializing" / "Running Checks" / "Building" / "Finishing Off" / "Almost Done" / "Not Long Left" / "Taking Longer than Expected"
(screenshot: 2023-10-24_002 PicPick)

and it ended up on the Dashy error page, with nothing displayed in the console. I also tested in Edge: exact same behavior.
(screenshot attached)

Just some information which might be important for you.
I stopped my Docker LXC on ProxMox, moved the root disk to local-lvm storage, restarted the LXC and navigated to my Dashy page.
Exact same sequence of Dashy startup steps as described above after the rebuild, and it ended on the same error; after that failed rebuild, all the pages show the exact same layout! So the only way to get my config running again was to redeploy a new, clean Dashy container via Portainer and rebuild the Dashy environment via the Dashy rebuild while on local-lvm storage. That worked like a charm.
(screenshot attached)

Log file of the successful rebuild attached:
dashyRebuildOnLocal-Lvm OK.txt

Moving it afterwards to ZFS gives a running Dashy container on ZFS.

Let me know if you need more ..

Luc

@LuxBibi
Author

LuxBibi commented Oct 24, 2023

One thought, still to be validated, in case it helps you narrow down/pinpoint the issue ...

I can export my container after I've regenerated Dashy on ZFS .. so you can import it on your side, should be able to "see" what exactly the problem is, and maybe get a hint where it might come from .. ?

Just let me know ...

@LuxBibi
Author

LuxBibi commented Nov 15, 2023

Hi all,

Just checking whether this issue is still open, and whether someone needs more information? I am here to help get rid of this .. ;-)

@LuxBibi
Author

LuxBibi commented Nov 22, 2023

Hi @CrazyWolf13

I had moved the Dashy Docker to be stored on local-lvm, knowing that this disables, in my ProxMox cluster, the availability of Dashy in case of a failure of the Docker host or the ProxMox host.

I therefore moved my Dashy Docker storage back to cluZFS, and tried to apply the modifications I needed to make to the Dashy files.

It seems that this time an error message was displayed at the end of the Docker log file, which did not appear in my previous faulty rebuilds.

Hope this helps to make Dashy cluZFS-compatible .. ;-)

Available to help with this tricky issue.

Luc

docker-dashy-LOG-file:

██████╗ █████╗ ███████╗██╗ ██╗██╗ ██╗
██╔══██╗██╔══██╗██╔════╝██║ ██║╚██╗ ██╔╝
██║ ██║███████║███████╗███████║ ╚████╔╝
██║ ██║██╔══██║╚════██║██╔══██║ ╚██╔╝
██████╔╝██║ ██║███████║██║ ██║ ██║
╚═════╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═╝


Welcome to Dashy! 🚀
Your new dashboard is now up and running with Docker


Using Dashy V-2.1.1. Update Check Complete
✅ Dashy is Up-to-Date

dashy@2.1.1 build
vue-cli-service build
Error: ENOENT: no such file or directory, stat '/app/dist/index.html'
Error: ENOENT: no such file or directory, stat '/app/dist/index.html'
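
That ENOENT is the Node server stat-ing a built index.html that the failed build never produced. A quick way to confirm that the dist directory really ended up empty (container name dashy assumed):

docker exec dashy ls -la /app/dist            # expect index.html after a good build
docker exec dashy stat /app/dist/index.html   # an ENOENT here matches the log above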

@CrazyWolf13
Collaborator

Sadly I cannot help you any further with this; the best bet is to hope for Lissy to look into it. However, she is really busy and there are more important things heavily needing her attention, so I guess right now it's a bit out of scope. This is only my opinion though, and maybe Lissy will look into this :)

@LuxBibi
Author

LuxBibi commented Nov 22, 2023

Thanks @CrazyWolf13 for your fast reply.

Ok, I will keep an eye on this issue, so I can maybe help with some actions on my side once @Lissy93 has some time to take a look at this.

All is documented so far in this ticket. I would be happy to help you guys if any information is needed.

Thanks so far. Thumbs up for this beautiful dashboard. Really great 🤩

Thanks Tobias

@LuxBibi
Author

LuxBibi commented Jan 9, 2024

The files are too huge to upload! The smallest file is 491 MB, the biggest 509 MB. The max upload on GitHub is 25 MB.

Let me know if we can find another service to transfer these files, like WeTransfer. But then I need an e-mail address where I can send the link. :-(

Luc

@LuxBibi
Author

LuxBibi commented Jan 11, 2024

Hi,

I'm getting back to you to see if you have an idea of how to send you the files?

I will be available till Friday evening ;-) ... and back on 22nd January ..

I'll adapt to your needs ..

Thanks for the support so far.

Luc

@TheRealGramdalf

Apologies, I was busy the past few days. Feel free to upload them to something like pixeldrain, and I'll take a look from there.

Also about the rebuild thing - this is a bug introduced in 2.1.1 that causes the application to not be rebuilt on startup (which it did in 2.1.0). See #1290 (comment) for a full explanation.
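
A hedged stopgap while the startup rebuild is broken in 2.1.1 (the tag name here is an assumption, so verify it exists on Docker Hub): run the pre-regression image, which still rebuilt on startup:

# pin the last version that rebuilt the app on startup (tag assumed)
docker pull lissy93/dashy:2.1.0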

@LuxBibi
Author

LuxBibi commented Jan 11, 2024

Hi ...

No problem at all ;-) .... I am happy that you are taking your time to look at this problem, especially as I do not know whether it is related to Dashy or to some third-party coincidence .. Hope you'll find it out ;-)

Amazing tool .. Better than WeTransfer, as no e-mail is needed ;-) .. Perfect ..

So here we are .. Here is the link: 2024-01-11-dashyDockerExports pointing to all 3 files as described in my previous e-mail ...

Thanks so far for your help ;-)

Luc

@JPDucky

JPDucky commented Jan 13, 2024

This may or may not work, but have you tried creating another ZFS instance to see whether it occurs there as well? I have had some wacky permission issues on my ZFS Proxmox cluster before.

@LuxBibi
Author

LuxBibi commented Jan 14, 2024 via email

@LuxBibi
Author

LuxBibi commented Jan 22, 2024

Hi all,

Just wanted to let you know that I am back from holiday .. ;-) Ready to help if needed. No hurry, as I know you surely have other topics to handle as well. I just wanted to let you know that, as of now, I am able to give any further details you may need ..

Your speed will be mine.

Thanks for your help so far.

Luc

@TheRealGramdalf

Alright, so I've finally got a chance to look at this a little bit.

I used Meld to take a look at the differences between the three; here are the main highlights:

  • The first export (0-local-lvm) is missing /app/dist/*.yml files (your dashboard configuration)
  • The second export (as expected) seems to have everything in order
  • The last export (2-zfs) is missing /app/dist/* entirely - the directory is completely empty

The implications:

0-local-lvm
  • The configuration files are inside the container, and thus should be accessible to the rebuilder. This is connected to the issue regarding 2.1.1/2.1.0 - in 2.1.1, the configuration files aren't copied from /app/public to /app/dist. This is also the case with my own container - /app/dist/conf.yml is missing after I recreate the container.
1-local-lvm
  • Working as expected, mostly used as a reference point
2-zfs
  • There is nothing in /app/dist, which causes lots of issues. I'm uncertain exactly what the cause is, but it is distinctly different from 0-local-lvm - that was simply an issue with configuration files, whereas on ZFS the directory is empty. My best guess would be that there is either a bind mount or a volume covering /app/dist, but that shouldn't be the case according to the compose file.
  • My next best guess is that when a rebuild is triggered, yarn removes everything in /app/dist as the first step (I confirmed this to be the case) - but something is preventing the rebuild, so nothing new is written.
What to do

Unfortunately it's quite hard to diagnose things remotely like this, but my current leading theory is that it has to do with the Docker image originally being stored on LVM and then migrated to ZFS - try deleting the Docker image from your LXC completely (I believe you can do it through the Portainer web UI) so it gets re-downloaded when you restart the container. A command sketch follows below.
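
For the shell-inclined, a minimal sketch of that purge from inside the LXC (Portainer's UI achieves the same; lissy93/dashy is the image name on Docker Hub, and docker-compose works too on older installs):

docker compose down                           # stop the stack first
docker image rm lissy93/dashy:2.1.1           # drop the possibly stale image layers
docker compose pull && docker compose up -d   # re-download and recreate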

@LuxBibi
Author

LuxBibi commented Jan 25, 2024

Hi,

Thanks for this detailed feedback. I know that debugging this issue is quite complex if you do not have the necessary infrastructure; sorry for that. Thanks for all your help in finding what is not behaving as expected ;-) as this will help us all.

Following your feedback, I deleted all the Dashy-related items in Docker (via Portainer). This includes:

  • Image
  • Docker container

The Docker compose file and Dashy *.yml files have not changed for about a month.
The LXC container has also been running on ZFS storage for about a month.

I re-pulled the image from Docker Hub (latest, which is TAG: 2.1.1) via the docker compose file and ended up with the Dashy container running (Healthy status displayed in Portainer), but as usual with the same menu items displayed in all Dashy menus. (This behavior has existed since the very beginning, with no difference between lvm, ZFS or ceph. Not really annoying, and OK for me, as I know I just need to rebuild to have the entries show up.)

I then did a rebuild, to have these different pages/menus recreated, while remaining on ZFS.

Same error occurred as at the very beginning.

(screenshot attached)

BTW: I created a ceph storage on the same ProxMox cluster, and Dashy behaves exactly the same way on ceph storage as on ZFS storage. :-(

I remain available for any information you may need.

P.S.: The image I used while creating this bug (TAG: 2.1.1) is the same as the image I just pulled (TAG: 2.1.1). Did I miss some "newer", not-yet-published image you want me to test?

@LuxBibi
Author

LuxBibi commented Apr 25, 2024

Hi @TheRealGramdalf ,
Hi all,

For every new release you publish, I test the new version on my ProxMox cluster, to see whether the error I encounter may have disappeared.

  • The ProxMox-related config is the same, apart from the "normal" system updates and your latest Dashy V-3.0.0 release.

All the prior releases did not change anything about my problem. But this release does react differently.

I hope this may help you pinpoint what the problem is. It is also intended to help other users now facing this new situation!

I post my log files hereafter.


Details:

I was running Dashy 2.1.2 on my ZFS filesystem prior to the update. Just as a reminder: Dashy on ProxMox with a ZFS filesystem ends up in an error while regenerating its pages/engine whenever any modification is made to any *.yml (config) file(s).

New behavior (same LXC container as with the prior release):

  • The Dashy container is running on ZFS
  • I regenerated the Dashy Docker container by pulling the Ver 3.0.0 image (stack update and re-pull image)
    • same yml config files as with the prior Ver 2.1.2
    • adapted the volume references in the docker compose file to point to the new directory as indicated in your documentation </app/user-data/conf.yml>
    • doubled the LXC memory from 2048 to 4096 MB

The Dashy container does not start anymore. (It is constantly restarting due to Restart Policy: "Unless Stopped".)

LOG results while being on ZFS [Dashy crashing]

$ node server
Checking config file against schema...
✔️ Config file is valid, no issues found
SSL Not Enabled: Public key not present
██████╗ █████╗ ███████╗██╗ ██╗██╗ ██╗
██╔══██╗██╔══██╗██╔════╝██║ ██║╚██╗ ██╔╝
██║ ██║███████║███████╗███████║ ╚████╔╝
██║ ██║██╔══██║╚════██║██╔══██║ ╚██╔╝
██████╔╝██║ ██║███████║██║ ██║ ██║
╚═════╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═╝


Welcome to Dashy! 🚀
Your new dashboard is now up and running with Docker


Using Dashy V-3.0.0. Update Check Complete
✅ Dashy is Up-to-Date

  • Building for production...
    WARN A new version of sass-loader is available. Please upgrade for best experience.
    ERROR Error: EINVAL: invalid argument, rmdir '/app/dist/css'
    Error: EINVAL: invalid argument, rmdir '/app/dist/css'
    error Command failed with exit code 1.
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    ERROR: "build" exited with 1.
    error Command failed with exit code 1.
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    yarn run v1.22.19
    $ NODE_OPTIONS=--openssl-legacy-provider npm-run-all --parallel build start
    $ NODE_OPTIONS=--openssl-legacy-provider vue-cli-service build
    $ node server
    Checking config file against schema...
    ✔️ Config file is valid, no issues found
    SSL Not Enabled: Public key not present
    ██████╗ █████╗ ███████╗██╗ ██╗██╗ ██╗
    ██╔══██╗██╔══██╗██╔════╝██║ ██║╚██╗ ██╔╝
    ██║ ██║███████║███████╗███████║ ╚████╔╝
    ██║ ██║██╔══██║╚════██║██╔══██║ ╚██╔╝
    ██████╔╝██║ ██║███████║██║ ██║ ██║
    ╚═════╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═╝

Welcome to Dashy! 🚀
Your new dashboard is now up and running with Docker


Using Dashy V-3.0.0. Update Check Complete
✅ Dashy is Up-to-Date

  • Building for production...
    WARN A new version of sass-loader is available. Please upgrade for best experience.
    ERROR Error: EINVAL: invalid argument, rmdir '/app/dist/css'
    Error: EINVAL: invalid argument, rmdir '/app/dist/css'
    error Command failed with exit code 1.
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    ERROR: "build" exited with 1.
    error Command failed with exit code 1.
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    yarn run v1.22.19
    $ NODE_OPTIONS=--openssl-legacy-provider npm-run-all --parallel build start
    $ NODE_OPTIONS=--openssl-legacy-provider vue-cli-service build
    $ node server
    Checking config file against schema...
    ✔️ Config file is valid, no issues found
    SSL Not Enabled: Public key not present
    ██████╗ █████╗ ███████╗██╗ ██╗██╗ ██╗
    ██╔══██╗██╔══██╗██╔════╝██║ ██║╚██╗ ██╔╝
    ██║ ██║███████║███████╗███████║ ╚████╔╝
    ██║ ██║██╔══██║╚════██║██╔══██║ ╚██╔╝
    ██████╔╝██║ ██║███████║██║ ██║ ██║
    ╚═════╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═╝

After stopping the LXC container and moving the storage to local-lvm, Dashy was able to start and generate all the pages of my configuration, and it is working like a charm.

Log file running LXC on local-lvm storage [ALL OK]

Using Dashy V-3.0.0. Update Check Complete
✅ Dashy is Up-to-Date
yarn run v1.22.19
$ NODE_OPTIONS=--openssl-legacy-provider npm-run-all --parallel build start
$ NODE_OPTIONS=--openssl-legacy-provider vue-cli-service build
$ node server
Checking config file against schema...
✔️ Config file is valid, no issues found
SSL Not Enabled: Public key not present
██████╗ █████╗ ███████╗██╗ ██╗██╗ ██╗
██╔══██╗██╔══██╗██╔════╝██║ ██║╚██╗ ██╔╝
██║ ██║███████║███████╗███████║ ╚████╔╝
██║ ██║██╔══██║╚════██║██╔══██║ ╚██╔╝
██████╔╝██║ ██║███████║██║ ██║ ██║
╚═════╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═╝


Welcome to Dashy! 🚀
Your new dashboard is now up and running with Docker


Using Dashy V-3.0.0. Update Check Complete
✅ Dashy is Up-to-Date

  • Building for production...
    WARN A new version of sass-loader is available. Please upgrade for best experience.
    Error: ENOENT: no such file or directory, stat '/app/dist/index.html'
    DONE Compiled successfully in 408539ms 10:33:05 PM
    File Size Gzipped
    dist/js/chunk-vendors.92e65062.js 6358.33 KiB 2293.83 KiB
    dist/js/dashy.43c6539a.js 768.52 KiB 231.87 KiB
    dist/js/chunk-4cfc5864.4aba8cfa.js 250.36 KiB 74.98 KiB
    dist/js/chunk-50f31ec3.96f6cd6b.js 79.45 KiB 19.64 KiB
    dist/precache-manifest.57f4a61195ab4a7836b384a95ae3fa9e.js 19.40 KiB 4.47 KiB
    dist/js/chunk-180be55e.f947613e.js 15.94 KiB 5.30 KiB
    dist/js/chunk-468d3a74.a5fc4bfa.js 15.58 KiB 4.49 KiB
    dist/js/chunk-03c5a0ba.399ef71d.js 15.38 KiB 5.04 KiB
    dist/js/chunk-16e26d5d.ee5b2c63.js 14.94 KiB 4.49 KiB
    dist/js/chunk-2642eaf9.4eed1878.js 14.39 KiB 4.88 KiB
    dist/js/chunk-0367deae.f8ce8511.js 13.15 KiB 4.52 KiB
    dist/js/chunk-7bba3126.e4385130.js 12.26 KiB 4.26 KiB
    dist/js/chunk-08ca355a.26f829d9.js 11.12 KiB 4.12 KiB
    dist/js/chunk-38169201.f89dfac1.js 11.10 KiB 4.44 KiB
    dist/js/chunk-460e6092.c50e1a0e.js 11.02 KiB 4.07 KiB
    dist/js/chunk-445cc501.4e92c9f7.js 10.89 KiB 4.09 KiB
    dist/js/chunk-187213fc.f70a23c4.js 10.80 KiB 4.03 KiB
    dist/js/chunk-edbdb67c.2a7383c8.js 9.28 KiB 3.32 KiB
    dist/js/chunk-2925d418.d00939dd.js 9.04 KiB 3.08 KiB
    dist/js/chunk-93c6be8c.e412d699.js 8.25 KiB 3.10 KiB
    dist/js/chunk-c8bd4cd0.da35f1b3.js 8.07 KiB 2.99 KiB
    dist/js/chunk-0248a1e9.d8ae36a6.js 8.03 KiB 2.86 KiB
    dist/js/chunk-7ba8e45c.31261b47.js 7.82 KiB 2.96 KiB
    dist/js/chunk-49f2d909.cbc33fa0.js 7.73 KiB 2.87 KiB
    dist/js/chunk-7e15df28.012f53d9.js 7.61 KiB 2.66 KiB
    dist/js/chunk-070d32ac.458ca0d0.js 7.35 KiB 2.87 KiB
    dist/js/chunk-0894290e.7f4f8889.js 7.11 KiB 2.69 KiB
    dist/js/chunk-e77c83e6.48594eb8.js 7.05 KiB 2.65 KiB
    dist/js/chunk-92c623f0.7b960119.js 7.02 KiB 2.81 KiB
    dist/js/chunk-fc7a7722.2250ffe1.js 6.96 KiB 2.59 KiB
    dist/js/chunk-0c7116ec.15f56f4d.js 6.88 KiB 2.76 KiB
    dist/js/chunk-29548417.1b7f8472.js 6.56 KiB 2.59 KiB
    dist/js/chunk-26dbf0a4.f1a7a0e0.js 6.25 KiB 2.52 KiB
    dist/js/chunk-1b35c628.bfc608ce.js 6.20 KiB 2.50 KiB
    dist/js/chunk-88331f84.e7726ecb.js 6.16 KiB 2.53 KiB
    dist/js/chunk-8db027b8.4a5649f0.js 6.15 KiB 2.44 KiB
    dist/js/chunk-15b37c0a.fecf5ad8.js 6.10 KiB 2.45 KiB
    dist/js/chunk-b7e4a5ce.c1e5d13d.js 6.09 KiB 2.57 KiB
    dist/js/chunk-f05c978e.1921df2e.js 6.07 KiB 2.40 KiB
    dist/js/chunk-7abb8001.1fb01462.js 6.07 KiB 2.26 KiB
    dist/js/chunk-32eb6af1.d683115c.js 6.02 KiB 2.55 KiB
    dist/js/chunk-04659cb4.924484e8.js 5.99 KiB 2.39 KiB
    dist/js/chunk-44cb61f1.0013706e.js 5.97 KiB 2.08 KiB
    dist/js/chunk-4073bae0.51329ecf.js 5.93 KiB 1.91 KiB
    dist/js/chunk-11e20f6f.68f8e974.js 5.88 KiB 2.19 KiB
    dist/js/chunk-08fae180.5daf04b7.js 5.85 KiB 2.41 KiB
    dist/js/chunk-4ab61964.aa761fe6.js 5.83 KiB 2.44 KiB
    dist/js/chunk-b52460ac.ae1e2d77.js 5.74 KiB 2.37 KiB
    dist/js/chunk-cd40f4ae.ed73fab0.js 5.69 KiB 2.34 KiB
    dist/js/chunk-ecec4fc4.dedf003b.js 5.68 KiB 2.44 KiB
    dist/js/chunk-4ef6dcf5.778ab1cf.js 5.67 KiB 2.36 KiB
    dist/js/chunk-7c4d77dc.c25c13cb.js 5.40 KiB 2.26 KiB
    dist/js/chunk-bd9012c4.e4b54229.js 5.38 KiB 2.32 KiB
    dist/js/chunk-21680640.a656bd80.js 5.35 KiB 2.22 KiB
    dist/js/chunk-043d9c91.c014b344.js 5.29 KiB 2.17 KiB
    dist/js/chunk-f539423c.7c46861d.js 5.26 KiB 2.15 KiB
    dist/js/chunk-6b5de1e1.c41bee96.js 5.24 KiB 2.10 KiB
    dist/js/chunk-4f2c58c5.f336971c.js 5.03 KiB 2.00 KiB
    dist/js/chunk-736b2ef0.1b65ea1f.js 5.03 KiB 2.06 KiB
    dist/js/chunk-f38e0ad2.7a41f73d.js 5.01 KiB 2.13 KiB
    dist/js/chunk-3a3d0cd8.81d833a5.js 4.99 KiB 2.15 KiB
    dist/js/chunk-674ac328.b3d128f3.js 4.99 KiB 2.22 KiB
    dist/js/chunk-cee89fa8.0d3bc86e.js 4.92 KiB 2.12 KiB
    dist/js/chunk-677c8830.d65a072e.js 4.85 KiB 2.06 KiB
    dist/js/chunk-0633ac20.674a3d69.js 4.81 KiB 2.04 KiB
    dist/js/chunk-6a170920.666bec94.js 4.69 KiB 2.06 KiB
    dist/js/chunk-c02e690a.ea23b03d.js 4.51 KiB 1.92 KiB
    dist/js/chunk-b25c821e.8af52b87.js 4.33 KiB 1.91 KiB
    dist/js/chunk-781da5fb.be037a62.js 4.32 KiB 1.45 KiB
    dist/js/chunk-b54d81ae.b4cb65ce.js 4.31 KiB 1.87 KiB
    dist/js/chunk-aa9cebcc.2f578d67.js 4.10 KiB 1.60 KiB
    dist/js/chunk-665a1900.a7d21b0f.js 3.69 KiB 1.69 KiB
    dist/js/chunk-14192a80.056262ad.js 3.64 KiB 1.51 KiB
    dist/js/chunk-1e169674.7a54b293.js 2.88 KiB 1.36 KiB
    dist/js/chunk-6ab1f28d.9a3be93f.js 2.83 KiB 1.33 KiB
    dist/js/chunk-72e3b16c.0f88bca5.js 2.80 KiB 1.30 KiB
    dist/js/chunk-75cc9f4d.a5021e27.js 2.74 KiB 1.30 KiB
    dist/js/chunk-0387fd77.c052ffac.js 2.74 KiB 1.27 KiB
    dist/js/chunk-0044633e.3dc0bad5.js 2.40 KiB 1.11 KiB
    dist/js/chunk-284f6914.401e1214.js 2.32 KiB 1.10 KiB
    dist/js/chunk-73f090a0.0e3ec0c9.js 2.31 KiB 1.12 KiB
    dist/js/chunk-0c51289a.eac23d06.js 2.27 KiB 1.06 KiB
    dist/js/chunk-7132ce43.f79ba314.js 2.22 KiB 1.08 KiB
    dist/js/chunk-2d225b78.80adc7b1.js 2.05 KiB 1.09 KiB
    dist/js/chunk-2ab49ff8.5963cee6.js 1.91 KiB 0.99 KiB
    dist/js/chunk-c0f28fc6.2dbaa6ba.js 1.91 KiB 0.94 KiB
    dist/js/chunk-d42744f4.6acd67c8.js 1.90 KiB 0.98 KiB
    dist/js/chunk-7795c4fe.770bf2c1.js 1.04 KiB 0.57 KiB
    dist/service-worker.js 1.04 KiB 0.61 KiB
    dist/js/chunk-3767f013.3b314b6a.js 0.75 KiB 0.45 KiB
    dist/css/dashy.899627eb.css 268.45 KiB 32.60 KiB
    dist/css/chunk-fc7a7722.f1790b34.css 11.54 KiB 1.75 KiB
    dist/css/chunk-03c5a0ba.fdf5ccee.css 9.49 KiB 1.81 KiB
    dist/css/chunk-0248a1e9.2af758e1.css 7.32 KiB 1.32 KiB
    dist/css/chunk-4073bae0.262be67e.css 5.86 KiB 1.00 KiB
    dist/css/chunk-0c7116ec.8d663b8e.css 3.98 KiB 0.96 KiB
    dist/css/chunk-29548417.1e586604.css 3.78 KiB 0.69 KiB
    dist/css/chunk-7795c4fe.8e5b7c8e.css 3.78 KiB 0.92 KiB
    dist/css/chunk-2642eaf9.103376cf.css 3.54 KiB 0.86 KiB
    dist/css/chunk-93c6be8c.b621be85.css 3.53 KiB 0.88 KiB
    dist/css/chunk-2925d418.2f4219ad.css 3.48 KiB 0.77 KiB
    dist/css/chunk-26dbf0a4.3f521e8a.css 3.31 KiB 0.86 KiB
    dist/css/chunk-c8bd4cd0.25b1ca48.css 3.28 KiB 0.83 KiB
    dist/css/chunk-vendors.d8067ad8.css 2.74 KiB 0.83 KiB
    dist/css/chunk-f05c978e.04b75e3f.css 2.67 KiB 0.59 KiB
    dist/css/chunk-7e15df28.208bbeec.css 2.51 KiB 0.64 KiB
    dist/css/chunk-0367deae.0f98d711.css 2.48 KiB 0.67 KiB
    dist/css/chunk-4cfc5864.9357c852.css 2.48 KiB 0.57 KiB
    dist/css/chunk-14192a80.31a5db2c.css 2.38 KiB 0.58 KiB
    dist/css/chunk-49f2d909.26592934.css 2.38 KiB 0.57 KiB
    dist/css/chunk-7ba8e45c.17242d8b.css 2.30 KiB 0.55 KiB
    dist/css/chunk-7bba3126.b97a92c1.css 2.17 KiB 0.56 KiB
    dist/css/chunk-7c4d77dc.8c1925ff.css 2.06 KiB 0.49 KiB
    dist/css/chunk-781da5fb.38b3bad4.css 2.03 KiB 0.56 KiB
    dist/css/chunk-8db027b8.377fb75a.css 2.01 KiB 0.57 KiB
    dist/css/chunk-edbdb67c.0de3bd5e.css 1.94 KiB 0.56 KiB
    dist/css/chunk-e77c83e6.729d6dc8.css 1.93 KiB 0.55 KiB
    dist/loading-screen.css 1.93 KiB 0.65 KiB
    dist/css/chunk-04659cb4.f809b0eb.css 1.86 KiB 0.50 KiB
    dist/css/chunk-08ca355a.0e2f8538.css 1.85 KiB 0.55 KiB
    dist/css/chunk-460e6092.0bcf49d9.css 1.85 KiB 0.56 KiB
    dist/css/chunk-88331f84.b825db4a.css 1.81 KiB 0.55 KiB
    dist/css/chunk-070d32ac.3ca152a5.css 1.80 KiB 0.51 KiB
    dist/css/chunk-445cc501.d9af4531.css 1.79 KiB 0.53 KiB
    dist/css/chunk-187213fc.851bbb61.css 1.78 KiB 0.52 KiB
    dist/css/chunk-180be55e.2679cb7e.css 1.77 KiB 0.53 KiB
    dist/css/chunk-6ab1f28d.dcd44809.css 1.65 KiB 0.42 KiB
    dist/css/chunk-21680640.f72d1c0d.css 1.64 KiB 0.51 KiB
    dist/css/chunk-32eb6af1.b73f2acc.css 1.59 KiB 0.48 KiB
    dist/css/chunk-4ab61964.950bd772.css 1.57 KiB 0.49 KiB
    dist/css/chunk-92c623f0.7601575f.css 1.55 KiB 0.47 KiB
    dist/css/chunk-b52460ac.d91d8d0b.css 1.49 KiB 0.47 KiB
    dist/css/chunk-4f2c58c5.e91567b0.css 1.38 KiB 0.43 KiB
    dist/css/chunk-aa9cebcc.43dd3768.css 1.36 KiB 0.41 KiB
    dist/css/chunk-7abb8001.d5057fa6.css 1.30 KiB 0.44 KiB
    dist/css/chunk-468d3a74.e7e4907a.css 1.26 KiB 0.42 KiB
    dist/css/chunk-38169201.87f602e2.css 1.16 KiB 0.46 KiB
    dist/css/chunk-15b37c0a.ebae7724.css 1.15 KiB 0.36 KiB
    dist/css/chunk-16e26d5d.97cc876a.css 1.14 KiB 0.41 KiB
    dist/css/chunk-ecec4fc4.7db7f641.css 1.12 KiB 0.36 KiB
    dist/css/chunk-0633ac20.857ad57c.css 1.04 KiB 0.34 KiB
    dist/css/chunk-bd9012c4.bbf2305d.css 1.04 KiB 0.34 KiB
    dist/css/chunk-043d9c91.9438acdb.css 0.93 KiB 0.37 KiB
    dist/css/chunk-1b35c628.f7e5ac71.css 0.90 KiB 0.31 KiB
    dist/css/chunk-4ef6dcf5.f9dd4bd8.css 0.88 KiB 0.30 KiB
    dist/css/chunk-f539423c.4b2b2c2a.css 0.88 KiB 0.33 KiB
    dist/css/chunk-3a3d0cd8.5aaf7cba.css 0.88 KiB 0.33 KiB
    dist/css/chunk-11e20f6f.070a8cfa.css 0.87 KiB 0.35 KiB
    dist/css/chunk-b7e4a5ce.df4ad987.css 0.86 KiB 0.32 KiB
    dist/css/chunk-f38e0ad2.1ea48a31.css 0.84 KiB 0.28 KiB
    dist/css/chunk-0894290e.edb63a9d.css 0.79 KiB 0.32 KiB
    dist/css/chunk-0387fd77.7aa83618.css 0.75 KiB 0.28 KiB
    dist/css/chunk-6b5de1e1.9eb66c9f.css 0.71 KiB 0.31 KiB
    dist/css/chunk-44cb61f1.025edb8a.css 0.69 KiB 0.31 KiB
    dist/css/chunk-736b2ef0.98820bcd.css 0.60 KiB 0.33 KiB
    dist/css/chunk-677c8830.df6a5b00.css 0.59 KiB 0.23 KiB
    dist/css/chunk-284f6914.58ade778.css 0.46 KiB 0.24 KiB
    dist/css/chunk-08fae180.9b2da476.css 0.46 KiB 0.22 KiB
    dist/css/chunk-0c51289a.d6684378.css 0.38 KiB 0.17 KiB
    dist/css/chunk-1e169674.98a4aa99.css 0.36 KiB 0.16 KiB
    dist/css/chunk-75cc9f4d.98a4aa99.css 0.36 KiB 0.16 KiB
    dist/css/chunk-d42744f4.f1c873fc.css 0.36 KiB 0.16 KiB
    dist/css/chunk-674ac328.d604576c.css 0.36 KiB 0.19 KiB
    dist/css/chunk-2ab49ff8.2ca1d591.css 0.36 KiB 0.16 KiB
    dist/css/chunk-6a170920.3839d02e.css 0.31 KiB 0.20 KiB
    dist/css/chunk-c0f28fc6.b67ed63a.css 0.22 KiB 0.16 KiB
    dist/css/chunk-3767f013.c9ab3ab3.css 0.11 KiB 0.10 KiB
    dist/css/chunk-cee89fa8.0918bc41.css 0.08 KiB 0.10 KiB
    dist/css/chunk-b54d81ae.61a081a9.css 0.08 KiB 0.10 KiB
    dist/css/chunk-665a1900.eeb31e13.css 0.07 KiB 0.09 KiB
    dist/css/chunk-b25c821e.f58ec558.css 0.07 KiB 0.09 KiB
    dist/css/chunk-c02e690a.ccf83212.css 0.06 KiB 0.08 KiB
    dist/css/chunk-cd40f4ae.90cf07cd.css 0.06 KiB 0.07 KiB
    dist/css/chunk-0044633e.0e433876.css 0.00 KiB 0.02 KiB
    dist/css/chunk-7132ce43.0e433876.css 0.00 KiB 0.02 KiB
    dist/css/chunk-72e3b16c.0e433876.css 0.00 KiB 0.02 KiB
    dist/css/chunk-73f090a0.0e433876.css 0.00 KiB 0.02 KiB
    Images and other types of assets omitted.
    DONE Build complete. The dist directory is ready to be deployed.
    INFO Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html

I then wanted to know whether switching back to ZFS would give a working Ver 3.0.0 Dashy. BUT after stopping the LXC container and moving the storage back to ZFS, Dashy once again behaves exactly as described above (restarting continuously and producing the exact same log file entries).

Hope this long comment will help you understand why your Dashy Docker does "not like" the ZFS filesystem. I was not able to test CEPH, as I will be travelling in the next weeks ...

If you need more details, just ping me ... I will answer when I am back (around the last week of May).

Thanks for your fabulous Dashy ;-) .... It is still the only solution matching my needs ;-)

Luc

@TheRealGramdalf

One question @LuxBibi - can you post the output of zpool get all | grep feature? I realized I never actually asked about your ZFS pool features - there were fixes in ZoL v2.2.0 regarding containerization, and I'm wondering if that hasn't been enabled.

The output of zfs version would also be helpful.

@LuxBibi
Author

LuxBibi commented Apr 26, 2024

Hi @TheRealGramdalf ,

First, thanks for the fast reply ;-) ...

Here are the requested details.

root@srvProxMox-2:~# zpool get all | grep feature

cluZFS-1 feature@async_destroy enabled local
cluZFS-1 feature@empty_bpobj active local
cluZFS-1 feature@lz4_compress active local
cluZFS-1 feature@multi_vdev_crash_dump enabled local
cluZFS-1 feature@spacemap_histogram active local
cluZFS-1 feature@enabled_txg active local
cluZFS-1 feature@hole_birth active local
cluZFS-1 feature@extensible_dataset active local
cluZFS-1 feature@embedded_data active local
cluZFS-1 feature@bookmarks enabled local
cluZFS-1 feature@filesystem_limits enabled local
cluZFS-1 feature@large_blocks enabled local
cluZFS-1 feature@large_dnode enabled local
cluZFS-1 feature@sha512 enabled local
cluZFS-1 feature@skein enabled local
cluZFS-1 feature@edonr enabled local
cluZFS-1 feature@userobj_accounting active local
cluZFS-1 feature@encryption enabled local
cluZFS-1 feature@project_quota active local
cluZFS-1 feature@device_removal enabled local
cluZFS-1 feature@obsolete_counts enabled local
cluZFS-1 feature@zpool_checkpoint enabled local
cluZFS-1 feature@spacemap_v2 active local
cluZFS-1 feature@allocation_classes enabled local
cluZFS-1 feature@resilver_defer enabled local
cluZFS-1 feature@bookmark_v2 enabled local
cluZFS-1 feature@redaction_bookmarks enabled local
cluZFS-1 feature@redacted_datasets enabled local
cluZFS-1 feature@bookmark_written enabled local
cluZFS-1 feature@log_spacemap active local
cluZFS-1 feature@livelist enabled local
cluZFS-1 feature@device_rebuild enabled local
cluZFS-1 feature@zstd_compress enabled local
cluZFS-1 feature@draid enabled local
root@srvProxMox-2:~#

root@srvProxMox-2:~# zfs version
zfs-2.2.3-pve1
zfs-kmod-2.1.13-pve1

Available if more needed,

Luc

@liss-bot liss-bot removed the 👤 Awaiting Maintainer Response [ISSUE] Response from repo author is pending label Apr 27, 2024
@TheRealGramdalf
Copy link

I believe the problem may be related to some bug fixes in ZFS version 2.2.0, primarily Linux Container support, which fixed some issues with overlay2 - which you are using as the storage driver according to #1337 (comment):

 Storage Driver: overlay2
  Backing Filesystem: zfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
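
For anyone reproducing this, the driver/backing-filesystem pair above comes from standard Docker introspection:

docker info | grep -A 5 'Storage Driver'   # driver plus backing filesystem
docker info --format '{{.Driver}}'         # just the driver name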

Based on the feature flags shown in zpool get all | grep feature, it looks like your zpool is still on a version earlier than 2.2.0 - or, at the very least, said zpool has not been zpool upgraded.

root@srvProxMox-2:~# zfs version
zfs-2.2.3-pve1
zfs-kmod-2.1.13-pve1

This suggests to me that zfs-utils has been updated to 2.2.3, but that the kernel is still using the older 2.1.13 module - you may be able to hot-swap this at runtime with modprobe, but the easiest way is to reboot the Proxmox node entirely. (My Proxmox server, which I recently updated to Proxmox 8.2, shows zfs-2.2.3-pve2 / zfs-kmod-2.2.3-pve2 after a reboot.)
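
A quick, non-destructive way to compare the userland tools against the module actually loaded (standard OpenZFS paths):

zfs version                      # userland and kmod versions side by side
cat /sys/module/zfs/version      # version of the currently loaded module
modinfo zfs | grep -i version    # version of the module staged for this kernel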

My recommended steps:

Warning

As always, take backups, and make sure you take proper precautions. The actions described below can be destructive, so follow with care.

  • Install the latest available proxmox updates, primarily all ZFS related ones
  • Reboot the node(s)
  • Double check that v2.2.x is loaded as indicated by zfs-kmod-*
  • Run zpool upgrade on the relevant pools
    • Note: There may be helpful information on how to upgrade your zpool in this thread. Keep in mind that these features are now available in the most current release, so you shouldn't need to enable any testing repositories - the thread is from the time when ZFS 2.2.0 was still an opt-in beta
  • (Optional, recommended): Stop all containers and perform docker system prune to remove all images, then start containers and re-pull images as needed (to re-write the images to the zpool in case of residual issues). A condensed command sketch follows below.

I would try that first and see where it takes you, since up to zfs 2.2.0 overlay2 support has been dicey at best, and resulted in many weird issues.
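
Condensed into commands under the same assumptions (pool name cluZFS-1 from earlier in this thread; zpool upgrade is one-way, so take backups first):

apt update && apt dist-upgrade   # pull the current Proxmox + ZFS packages
reboot
zfs version                      # confirm zfs-kmod-2.2.x is now loaded
zpool upgrade cluZFS-1           # enable the new feature flags (irreversible)
docker system prune -a           # optional: remove unused images for a clean re-pull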

@LuxBibi
Author

LuxBibi commented Apr 30, 2024

Hi @TheRealGramdalf ,

I really appreciate your help and effort in getting rid of this issue. Thanks a lot for this ;-). Especially as these updates seem to be the solution to all the problems I have encountered so far with Dashy on my ProxMox cluster!!!!

Rebuilding, changing *.yml files, migrating .. everything now works like a charm ... ;-)

I'll close this issue, as Dashy is now running perfectly on these new ProxMox updates. Thanks for your ideas and hints.

As this may be of interest for anyone encountering the same problem, here is the current status (it took some time, as I was not at home):

  • I tested the update to the latest ProxMox 8.2.2 release on my ProxMox test environment, without testing all my LXC and Docker containers. Just the "PRIO-1" services
  • As this worked without issue, I started migrating my 3-node ProxMox cluster to this same release, one node after the other:
    -- apt update
    -- apt upgrade
    -- apt dist-upgrade
    -- reboot
    -- waited for all the LXC and Docker containers belonging to this node to be migrated back

I did this for the remaining nodes in my cluster, ending up with an 8.2.2 ProxMox cluster!

I redeployed the Dashy stack after removing the "old" Dashy image and re-pulling the latest v3.0.1 Dashy image.

Docker container log file:
yarn run v1.22.19
$ node server
Checking config file against schema...
✔️ Config file is valid, no issues found
SSL Not Enabled: Public key not present
██████╗ █████╗ ███████╗██╗ ██╗██╗ ██╗
██╔══██╗██╔══██╗██╔════╝██║ ██║╚██╗ ██╔╝
██║ ██║███████║███████╗███████║ ╚████╔╝
██║ ██║██╔══██║╚════██║██╔══██║ ╚██╔╝
██████╔╝██║ ██║███████║██║ ██║ ██║
╚═════╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═╝


Welcome to Dashy! 🚀
Your new dashboard is now up and running with Docker


Using Dashy V-3.0.1. Update Check Complete
✅ Dashy is Up-to-Date

Here are the current versions:

pveversion
pve-manager/8.2.2/9355359cd7afbae4 (running kernel: 6.8.4-2-pve)

zfs version
zfs-2.2.3-pve2
zfs-kmod-2.2.3-pve2

zpool get all | grep feature
cluZFS-1 feature@async_destroy enabled local
cluZFS-1 feature@empty_bpobj active local
cluZFS-1 feature@lz4_compress active local
cluZFS-1 feature@multi_vdev_crash_dump enabled local
cluZFS-1 feature@spacemap_histogram active local
cluZFS-1 feature@enabled_txg active local
cluZFS-1 feature@hole_birth active local
cluZFS-1 feature@extensible_dataset active local
cluZFS-1 feature@embedded_data active local
cluZFS-1 feature@bookmarks enabled local
cluZFS-1 feature@filesystem_limits enabled local
cluZFS-1 feature@large_blocks enabled local
cluZFS-1 feature@large_dnode enabled local
cluZFS-1 feature@sha512 enabled local
cluZFS-1 feature@skein enabled local
cluZFS-1 feature@edonr enabled local
cluZFS-1 feature@userobj_accounting active local
cluZFS-1 feature@encryption enabled local
cluZFS-1 feature@project_quota active local
cluZFS-1 feature@device_removal enabled local
cluZFS-1 feature@obsolete_counts enabled local
cluZFS-1 feature@zpool_checkpoint enabled local
cluZFS-1 feature@spacemap_v2 active local
cluZFS-1 feature@allocation_classes enabled local
cluZFS-1 feature@resilver_defer enabled local
cluZFS-1 feature@bookmark_v2 enabled local
cluZFS-1 feature@redaction_bookmarks enabled local
cluZFS-1 feature@redacted_datasets enabled local
cluZFS-1 feature@bookmark_written enabled local
cluZFS-1 feature@log_spacemap active local
cluZFS-1 feature@livelist enabled local
cluZFS-1 feature@device_rebuild enabled local
cluZFS-1 feature@zstd_compress enabled local
cluZFS-1 feature@draid enabled local
cluZFS-1 feature@zilsaxattr disabled local
cluZFS-1 feature@head_errlog disabled local
cluZFS-1 feature@blake3 disabled local
cluZFS-1 feature@block_cloning disabled local
cluZFS-1 feature@vdev_zaps_v2 disabled local
