• 3 Posts
  • 7 Comments
Joined 1 year ago
Cake day: July 29th, 2023



  • For light-touch monitoring this is my approach too. I have one instance in my network, and another on fly.io for the VPSs (my most common outage is my home internet). To make it a tiny bit stronger, I wrote a Go endpoint that exposes the disk and memory usage of a server, including mem_okay and disk_okay keywords, and I have Kuma checking those (see the sketch after this comment).

    I even have the two Kuma instances checking each other by making a status page and adding checks for each other’s ‘degraded’ state. I have ntfy set up on both so I get the Kuma change notifications on my iPhone. I love ntfy so much I donate to it.

    For my VPSs, this is probably not enough, so I am considering the more complicated solutions (I’ve started wanting to know about things like an influx of fail2ban bans, etc.).
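    For illustration, a minimal sketch of what an endpoint like that could look like - not the actual code from the comment above. It is Linux-only and standard library only, and the /health route, port 8090, mount point and thresholds are all assumptions; only the mem_okay / disk_okay keywords come from the description:

```go
// Sketch of a small health endpoint exposing disk and memory headroom,
// plus "disk_okay" / "mem_okay" keywords for Uptime Kuma's keyword monitor.
// Linux-only (syscall.Statfs, /proc/meminfo); thresholds and port are assumptions.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// diskFreePct returns the percentage of free space on the filesystem at path.
func diskFreePct(path string) (float64, error) {
	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		return 0, err
	}
	return float64(fs.Bavail) / float64(fs.Blocks) * 100, nil
}

// memAvailablePct parses MemTotal and MemAvailable out of /proc/meminfo.
func memAvailablePct() (float64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	vals := map[string]float64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		key := strings.TrimSuffix(fields[0], ":")
		if key == "MemTotal" || key == "MemAvailable" {
			v, _ := strconv.ParseFloat(fields[1], 64)
			vals[key] = v
		}
	}
	if vals["MemTotal"] == 0 {
		return 0, fmt.Errorf("could not read MemTotal")
	}
	return vals["MemAvailable"] / vals["MemTotal"] * 100, sc.Err()
}

func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		disk, _ := diskFreePct("/") // assumed mount point to watch
		mem, _ := memAvailablePct()

		fmt.Fprintf(w, "disk_free_pct: %.1f\n", disk)
		fmt.Fprintf(w, "mem_available_pct: %.1f\n", mem)
		if disk > 10 { // assumed threshold: more than 10% free
			fmt.Fprintln(w, "disk_okay")
		}
		if mem > 15 { // assumed threshold: more than 15% available
			fmt.Fprintln(w, "mem_okay")
		}
	})
	http.ListenAndServe(":8090", nil) // assumed port
}
```

    An Uptime Kuma “HTTP(s) - Keyword” monitor pointed at /health with the keyword mem_okay (and a second monitor for disk_okay) then goes down as soon as the keyword disappears from the response.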


  • thirdBreakfast@lemmy.world to Selfhosted@lemmy.world · Kavita runners · edited · 7 months ago

    - fiction
        - Abbott, Edwin A_
            - Flatland
                - Flatland - Edwin A. Abbott.epub
                - Flatland - Edwin A. Abbott.jpg
                - Flatland - Edwin A. Abbott.opf
        - Achebe, Chinua
            - Things Fall Apart
                - Things Fall Apart - Chinua Achebe.epub
                - Things Fall Apart - Chinua Achebe.jpg
                - Things Fall Apart - Chinua Achebe.opf
    

    So in each directory that I use to delineate a library, I have a subdirectory for each author (in sort-order form). Within each author subdirectory is a subdirectory for each book, named with just the title, and inside that are the book files themselves, named [book name] - [author].[extension] (see the sketch after this comment).

    I didn’t invent this, it’s just what Calibre spits out. When I buy a new book, I ingest it into Calibre, fix any metadata and export it to the NAS. Then I delete the Calibre library - I’m just using it to do the neatening up work.
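    To make the layout concrete, here’s a tiny sketch (not from the comment; the /volume1/books root, the type and the function names are made up) of how a book’s path composes under this convention:

```go
// Sketch of the Calibre-style layout described above:
// <library>/<author, sort form>/<title>/<title> - <author>.<ext>
// Root path, type and function names here are hypothetical.
package main

import (
	"fmt"
	"path/filepath"
)

type book struct {
	library    string // e.g. "fiction"
	authorSort string // author in sort-order form, e.g. "Abbott, Edwin A_"
	author     string // display form used in the file name
	title      string
}

// bookPath composes <root>/<library>/<author sort>/<title>/<title> - <author>.<ext>
func bookPath(root string, b book, ext string) string {
	file := fmt.Sprintf("%s - %s.%s", b.title, b.author, ext)
	return filepath.Join(root, b.library, b.authorSort, b.title, file)
}

func main() {
	b := book{"fiction", "Abbott, Edwin A_", "Edwin A. Abbott", "Flatland"}
	for _, ext := range []string{"epub", "jpg", "opf"} {
		fmt.Println(bookPath("/volume1/books", b, ext)) // assumed NAS root
	}
}
```

    Running it prints the same three Flatland paths as the tree above, just rooted at an assumed /volume1/books share.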




  • Yo dawg, I run most of my services as a Docker container inside its own LXC container. It used to bug me that this seems like a less-than-optimal use of resources, but I love the management - all the VMs and containers on one pane of glass, super simple snapshots, dead easy to move a service between machines, and simple to instrument the LXC for monitoring.

    I see other people running, and I’m interested in, an even more generic system (maybe Cockpit or something), but I’ve been really happy with this. If OP’s dream is managing all the containers and VMs together, I’d back having a look at Proxmox.


  • This is where I landed on this decision. I run a Synology which just does NAS on spinning rust, and I don’t mess with it. Since you know rsync, this will all be a painless setup apart from the upfront cost. I’d trust any 2-bay Synology less than 10 years old (I think the last two digits of the model number are the year). Then, if your budget is tight, grab a couple of second-hand disks from different batches (or three if your budget stretches to it).

    I also endorse u/originalucifer’s comment about a real machine. Thin clients like the HP minis or Lenovos are a great step up.