Ooh, didn’t know libvirt supported clusters and live migrations…
I’ve just set up Proxmox, but as it’s Debian-based and I run Arch everywhere else, maybe I could try that… thanks!
Ah, right. I wasn’t sure about that part.
So, the XMPP server just helps to initiate the connection between clients, then they communicate directly?
Will that work if someone’s at home (inside the network) and talking to someone outside (via the pfSense proxy)?
Thanks. I hadn’t come across Snikket to compare it with, but I’ll take a look.
But what about the movies where the actors are typing commands and a visual GUI is moving around and updating on the screen (and making sound effects too)?
Isn’t that the best of all worlds? /s
Weird. netin was busy, yet the bottom of the screen implies more outbound traffic (I guess it’s connected the other way around?)
And the log looks like an SMB/CIFS issue… maybe not the interweb?
But, it definitely looks like something got stuck in a loop and triggered a memory leak.
Whatever VM / CT was using CPUs 7 & 10 at the time was the problem… find that and you’ll find the next step down the rabbit hole…
They only see your public IP address (i.e. your router), so all devices on the private side will appear to be the same source.
So, if your laptop and your server (and anyone else at the same location) are connected to the internet via the same router, then you’re the same source.
Generally, user interfaces are hard work. If you just want to code, then having a web app means you’re already 50% done.
Actually, it should be 90% done, but each browser has its differences, which means more coding… I’m looking at you, Internet Explorer.
In my experience both new and used drives either fail within the first few weeks or they go on foreverrrrr…
Do it. The last thing you need during a rebuild is the stress of not knowing how long / other issues with your specific setup.
It’s only “disaster recovery” if you’ve never practiced… otherwise it’s just “recovery”.
+1 An old ISP of mine still uses RoundCube for their webmail, so if it’s good enough for them, it’s good enough for self hosters.
Give FairEmail a look…
Still the same, or has it solved itself?
If it’s lots of small files rather than a few large ones? That’ll be the file allocation table and/or journal…
A few large files? Not sure… something’s getting in the way.
Where are you copying to / from?
Duplicating a folder on the same NAS on the same filesystem? Or copying over the network?
For example, some devices have a really fast file transfer until a buffer fills up and then it crawls.
Rsync might not be the correct tool either if you’re duplicating everything to an empty destination…?
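To show what I mean, a minimal sketch (the paths are just examples, nothing from your setup):

```shell
# throwaway example data so this is self-contained
mkdir -p src dst
echo "hello" > src/file.txt

# rsync earns its keep on *incremental* syncs; on a first full copy,
# its per-file checks are just overhead
rsync -a --info=progress2 src/ dst/rsync-copy/

# one-shot copy to an empty destination: plain cp -a (or tar) is often quicker
cp -a src dst/plain-copy
```

Same end result either way; rsync only starts paying off on the second run, when most files are unchanged.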
Never had an issue with EXT4.
Had a problem on a NAS where BTRFS was taking “too long” for systemd to check it, so it just didn’t get mounted… bit of config tweaking and all is well again.
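In case it helps anyone hitting the same thing, the tweak was along these lines in /etc/fstab (the UUID and mountpoint are placeholders, not my real ones). `x-systemd.mount-timeout` gives the volume longer than the default 90 s to mount, and `nofail` stops a slow mount from blocking the boot:

```
UUID=xxxx-xxxx  /mnt/tank  btrfs  defaults,nofail,x-systemd.mount-timeout=300s  0  0
```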
I use EXT* and BTRFS wherever I can because I can manipulate them with standard tools (incl. GParted).
I have one LVM system, which was interesting, but I wouldn’t do it that way again (I used it to add drives to a media PC).
And as for ZFS… I’d say it’s very similar to BTRFS, but just slightly too complex on Linux with all the licensing issues, etc., so I just can’t be bothered with it.
As a throw-away comment, I’d say ZFS is used by TrueNAS (not a problem, just sayin’…) and… that’s about it??
As to the OP’s original question, I agree with the others here… something’s not right there, but it’s probably not the filesystem.
This is the correct response
Yep, Proxmox itself is very light on resources, so most is available for the VMs / containers.
Just another point… I’ve had some issues with Dell BIOS not respecting the Power On after power loss settings - usually a BIOS upgrade solves that and 99% of Dells still have “just 1 more” update on the website…
I’d also recommend installing a Wake-on-LAN tool on that Pi too… then if you VPN in from outside you can SSH into the Pi and power on other things that “accidentally” got shut down.
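For anyone wondering, the Wake-on-LAN bit is just a small tool (e.g. the `wakeonlan` package on Debian / Raspberry Pi OS… package name from memory) that fires a “magic packet” at the sleeping machine’s MAC. A rough sketch of what’s actually in that packet (the MAC is a placeholder):

```shell
mac="aabbccddeeff"   # placeholder MAC, separators stripped

# magic packet = 6 bytes of 0xff, then the MAC repeated 16 times (102 bytes)
hdr=$(printf 'ff%.0s' 1 2 3 4 5 6)
body=$(printf "${mac}%.0s" $(seq 16))
pkt="${hdr}${body}"

# 204 hex chars = 102 bytes; a WoL tool sends this as a UDP broadcast (usually port 9)
echo "${#pkt}"   # 204
```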
Yep. I initially found the daily journal approach a bit strange, but I use this for work as much as for personal stuff, so it actually helps…
My suggestion for your use case would be to keep a page per “thing”, i.e. server / container / etc., and then when you make a change you can just say (on that day’s journal page):
`Set up a backup for [[Server X]] and it’s going to [[NAS2]]` (for example)
Then, on either of those 2 pages you’ll automatically see the link back to the journal page, so you’ll know when you did it…
I think you can disable the journal approach if it’s not useful…
But, the important part is, the files underlying the notes you’re making are in plain text with the page name as the filename, whereas with Joplin you could never find the file…
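For example, a journal entry like the one above ends up on disk as something like `journals/2024_01_15.md` (that folder layout is from memory, so double-check on your install), containing just:

```
- Set up a backup for [[Server X]] and it’s going to [[NAS2]]
```

…and the `[[…]]` references get their own plain-text files under `pages/`, named after the page.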
Also, if you modify the file (live) outside of Logseq, it copes with that and refreshes the content onscreen.
And the links are all dynamic… renamed the NAS? Fine, Logseq will reindex all the pages for you…
Agreed… with a timestamp on the video, this was the best advice I had after a house fire.