Trumpites aren’t intelligent people.
Not enough info. Those are two different things.
Neat
MajorMUD is the only one off the top of my head.
Here you go:
There could probably be some additional refactoring here, but it works for my setup. I’m using default nginx paths, so the paths probably look different from installs that use custom locations like /var/www, etc.
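The general shape is something like this (a sketch rather than my exact script; the webroot, download URL handling, and www-data user are assumptions for a stock nginx install):

#!/bin/bash
# Usage: sudo ./scriptName.sh 28.0.1
# Sketch of a Nextcloud upgrade helper for a default nginx layout.
set -euo pipefail

VERSION="$1"
WEBROOT="/usr/share/nginx/html"            # assumed default nginx webroot
NEW_DIR="${WEBROOT}/nextcloud-${VERSION}"
TARBALL="nextcloud-${VERSION}.tar.bz2"

# Download and unpack the release
cd /tmp
wget "https://download.nextcloud.com/server/releases/${TARBALL}"
tar -xjf "${TARBALL}"
mv nextcloud "${NEW_DIR}"

# Carry the existing config forward and fix ownership for the web user
# (assumes the data directory lives outside the webroot)
cp "${WEBROOT}/nextcloud/config/config.php" "${NEW_DIR}/config/config.php"
chown -R www-data:www-data "${NEW_DIR}"

# Repoint the symlink nginx serves from, then run the upgrade as the web user
ln -sfn "${NEW_DIR}" "${WEBROOT}/nextcloud"
sudo -u www-data php "${NEW_DIR}/occ" upgrade
sudo -u www-data php "${NEW_DIR}/occ" maintenance:mode --off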
Use it by putting it in a shell script, make it executable, then call it:
sudo scriptName.sh 28.0.1
Replace the version with whatever version you’re upgrading to. I would highly recommend never upgrading to a .0 release; always wait for at least a .1 patch. I left some sleeps in there from when I was debugging a while back; those are safe to remove assuming it works in your setup. I also noticed some variables weren’t quoted. I’m not a bash programmer, so there are probably some consistency issues that could be addressed if someone is OCD.
I’d rather go back to the 90s! The good old times, when devs could lose all the commercial software source code they were developing when the hard drive crashed! And there were no backups! Sorry, people who bought licenses! 😂
Narrator: this happened more than once.
Another example of this is what happened with KeePass, then KeePassX, which gave us KeePassXC. It went from a single dev, to a single dev, to a group of devs that were serious about the ecosystem.
Sure! I’ll respond with a link in a bit.
As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups, otherwise it’s an exercise in futility.
One of my next steps was hardening my OPNsense router, since it handles all the edge network reverse proxy duties, so IDS was on the list. I’m digging into Crowdsec now; it looks like there’s an implementation for OPNsense. Thanks for the tip!
Good call. I do some backups now, but I should formalize that process. Any recommendations on self-hosted packages that can handle the append-only functionality?
I wonder what the performance impact would be if you were to move pgsql onto bare metal with enough RAM dedicated to caching all of the db data (think: i5 or i7 NUC). That’s going to be my next step with my homelab; I want to migrate everything to a single db host with a lot of RAM and M.2 storage and avoid the db process replication I have going on. I have no performance complaints with NC currently; I’m running PHP caching and Redis, as well as image previews and imaginary.
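To put rough numbers on it, this is the kind of tuning I have in mind for a dedicated box (a sketch assuming ~32 GB of RAM; the values are rules of thumb, not benchmarks):

sudo -u postgres psql -c "ALTER SYSTEM SET shared_buffers = '8GB';"        # roughly 25% of RAM for PostgreSQL's own cache
sudo -u postgres psql -c "ALTER SYSTEM SET effective_cache_size = '24GB';" # tells the planner the OS cache is large
sudo -u postgres psql -c "ALTER SYSTEM SET work_mem = '64MB';"             # per-sort memory; tune to the workload
sudo systemctl restart postgresql                                           # shared_buffers only takes effect after a restart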
You absolutely need to move from patch to patch and cannot just do a multiple-version jump safely. You also need to validate the configs between versions, especially on major release updates, or you risk breaking things. New features and optimizations happen, and you may also need to change or update your reverse proxy configuration on update, or modify db table configuration (just pulling this from memory, as I’ve had to do it before). I don’t know that there’s automation for each one of those steps.
Because of that, I run Nextcloud in a VM and install it from the binary package. I wrote a shell script that handles downloading, moving the files, updating permissions, copying the old config forward, symlinking, and doing the upgrade. Then all I have to do is log in as administrator, check out the admin dashboard, and make sure there aren’t new things I have to address on the status page. It’s a pain, but my Nextcloud uses an external db, Redis, and PHP caching, so it’s not an easy out-of-the-box setup. But it’s been solid for a long time once I adopted using this script.
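For reference, the post-upgrade checks mostly boil down to a few occ calls (a sketch only; the paths assume the install sits under the default nginx webroot and runs as www-data):

sudo -u www-data php /usr/share/nginx/html/nextcloud/occ status                  # confirm the new version came up cleanly
sudo -u www-data php /usr/share/nginx/html/nextcloud/occ db:add-missing-indices  # a common nag after major upgrades
sudo -u www-data php /usr/share/nginx/html/nextcloud/occ app:update --all        # bring apps in line with the new server version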
-h for help should list commands, and it’s nested so you can get help for each subcommand. You’ll want to read the Getting Started section.
I’m using the Whipper docker container mostly successfully.
I agree, writing meaningless tests helps nobody and just creates extra work for everyone. Unit tests should prove functionality, and integration tests act as a vise. Much like you said, if a test breaks in that scenario, then you know something in another class has violated that contract. Good tests will have meaningful names and prove functionality, particularly in the backend where it is especially important…
You mention (what I would consider) a bad practice of allowing merges without review. While that might be fine on personal projects with only one dev, strict review guidelines should exist so that nobody can just “push to prod”. CI/CD is your friend - use it so that staging and prod never break. Again, I’m used to working on systems used by scores of millions of users, so I appreciate forced automated validation. Nobody likes dumb breaks on a Friday before vacation.
That does sound like a nightmare. I’m assuming you mean failed test when you say “red boy”, and that made me wonder about PR practices. I’m used to a very strict review environment and fairly quick review turnaround or requests to go over the code. I’ve heard horror stories about people not getting PRs reviewed for days or weeks or some people just plain refusing to review code. I work on microservices that are all usually less than 10,000 lines though, not something with over a million lines of legacy code.
Yep. When I was still doing QA, I saw some pretty terrible practices and tested code that barely built. Now, as a software engineer, I have no QA and rely heavily on my own testing practices: namely, unit testing first, then integration testing and system/e2e testing. I can’t guarantee the code is bug-free, and there are parts I know could be refactored (tech debt), but I know each piece is tested and does what I expect it to. As corny as it sounds, I’m a big fan of TDD. Unit/IT/E2E tests don’t replace QA in my opinion, but they set QA up to focus on the bugs that matter and not the basic stuff.
😂😂 I got downvoted for that. I guess people dislike writing tests.
yt-dlp and PeerTube.