Systemd service doesn't trigger on snapshot creation, errors on install #14
This is a fresh installation?
Yes, it's a fresh installation.
Generated writable snapshots are stored in the /root/.refind-btrfs directory because you've set the "modify_read_only_flag" option to false (default value). I think this has something to do with having a separate /boot partition - the logic of matching boot stanzas with snapshots and the root device is different in this case. Recent changes might have broken it, somehow. I'll have to try to reproduce it locally later today. The absence of unit tests is starting to become a real nuisance...
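For context, the option mentioned above lives in refind-btrfs's TOML configuration file. The sketch below is only illustrative: the file path and section name are my assumptions; only the "modify_read_only_flag" name and its false default are confirmed by this thread.

```toml
# Assumed location: /etc/refind-btrfs.conf (path and section name are guesses).
# With the flag set to true, refind-btrfs would flip each selected snapshot
# to read-write in place instead of generating a writable copy under
# /root/.refind-btrfs, as described in the comment above.
[snapshot-manipulation]
modify_read_only_flag = true
```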
Here: root_vmlinuz-linux.conf Yes, I do have a separate /boot partition (for LUKS2 to work properly).
I think that this line might have something to do with the issue you're experiencing. Would it be possible for you to comment it out (or delete it altogether) and try running the refind-btrfs script? It's just so I can narrow down the cause of the problem. I'm not entirely sure if that's going to work on its own though, you should probably also delete these two files:
I'd also stop the systemd service, although it probably just exits on its own (unexpected exception).
I deleted those two Python files, stopped the service, and commented out those lines.
Thank you, that's really helpful information. Fixing it should be easier now. You could try running the service now and seeing whether or not it works after a snapshot is automatically taken by Snapper. |
Doesn't seem like it's generating a new stanza after creating a snapshot... (the service)
Hm, an error should be visible in the journal via
No errors |
And a new snapshot was created under /.snapshots? I don't understand why there's no output in the journal at all. |
Yes, they are created in that folder and visible by
What exactly does
After the last entry, I did create snapshots 2-3 times.
If it does work, it should output various information to the journal, not just that it started. Btw, I also noticed on my own machine that the filesystem events are not raised and/or not caught (I'm not sure which) by the Watchdog library, which I'm using for this feature. It happens exactly at midnight, but not every day - pretty weird. The official package repo now has a pretty outdated version of this library. I'm the one who flagged it, if I remember correctly.
Gonna install the git version ASAP :)
You mean this one? Hm, maybe that does the trick but I'm not too sure... |
Got this error after making a snapshot:
That version is even more outdated, it seems (breaking API changes), but at least something happened, I guess.
So, what's the next thing to do? How can I help?
Well, currently nothing. I'll try to fix the issue regarding the error you've seen (script's output) and then we'll just have to see when I make a new release. I'll also check the service's output on my own machine today even though it worked just fine yesterday. |
I've found the culprit (I hope so), pushed the fix and created a new release. Please check it out when you can and report back here. |
I just installed the new version. I got a warning about a missing CPython file, but the installation completed just fine.
This is the output I get with a freshly installed version and a snapshot created at 20:00 (18:00 UTC): I really have no idea why it doesn't work on your machine. What is so special about your setup, I wonder? You're using Snapper, as am I, but that shouldn't really matter. Damn, I've just noticed that Watchdog currently has 142 open issues... I wish there was an alternative library, at this point.
I use LUKS2, and I have two additional read-only disks (old disks that I currently use as migration storage to the new one). They are both LUKS2 too, and one of them has 2 EFISTUBs on a FAT32 boot partition, which I can boot into.
The same author has another guide, as well. I don't know why he felt the need to have more than one. I mean it's entirely possible to use inotify directly and run the script when needed but I didn't want to go with that option because Watchdog already utilizes it on Linux and it's been working more or less fine for me and other users, it seems. |
Hm, I've found this interesting piece of information. There is apparently a maximum number of watches allowed by inotify. Perhaps you could try increasing it and see what happens then? Mine is set to 524288; try setting it to 1048576 and rebooting the system. EDIT: This setting can be changed temporarily, as well (the first link).
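The limit mentioned above can be raised persistently via a sysctl drop-in file. This is a sketch: the file name is arbitrary, and 1048576 is simply the doubled value suggested in this thread.

```
# Hypothetical /etc/sysctl.d/40-inotify.conf
# Raises the per-user inotify watch limit.
fs.inotify.max_user_watches = 1048576
```

It can be applied without a reboot via "sudo sysctl --system", or changed temporarily (until reboot) with "sudo sysctl -w fs.inotify.max_user_watches=1048576".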
I increased that inotify limit via sysctl and it didn't work either, same 4 lines in status... For now I've decided to make at least a temporary hack to run refind-btrfs on startup and shutdown: refind-btrfs-create.service
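A minimal sketch of what such a oneshot unit could look like, under the assumptions that refind-btrfs is installed at /usr/bin/refind-btrfs and that running it once at boot and once at shutdown is acceptable. The unit contents and dependencies are my guesses, not the poster's actual file.

```ini
# Hypothetical /etc/systemd/system/refind-btrfs-create.service
[Unit]
Description=Regenerate rEFInd snapshot stanzas at boot and shutdown
After=local-fs.target

[Service]
Type=oneshot
# RemainAfterExit keeps the unit "active" so ExecStop actually runs at shutdown.
RemainAfterExit=yes
ExecStart=/usr/bin/refind-btrfs
ExecStop=/usr/bin/refind-btrfs

[Install]
WantedBy=multi-user.target
```

After "systemctl enable refind-btrfs-create.service", snapshots taken while the system is running would still be missed until the next shutdown or boot, which is why this only works as a stopgap.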
Pretty weird, this issue definitely requires further investigation. |
I have the same problem. I increased the inotify watch limit and checked the watches in use.
Interesting, it could be caused by either one of these:
Arch is still using the old 0.10.x Watchdog branch, intended for what is now a prehistoric Python version, instead of the newer 1.0.x branch intended for Python 3.6 and newer.
If I had too many snapshots, the cleanup service stopped. Because of this, I need to manually run refind-btrfs each time I want to populate rEFInd with snapshots. All journalctl -u refind-btrs -b says is this: And also, when I run this, it gives me such an error.
Configs: refind-btrfs.conf, refind.conf