Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
https://d.sb/
Mastodon: @dan@d.sb

  • 1 Post
  • 60 Comments
Joined 1 year ago
Cake day: June 14th, 2023



  • At home - Networking

    • 10Gbps internet via Sonic, a local ISP in the San Francisco Bay Area. It’s only $40/month.
    • TP-Link Omada ER8411 10Gbps router
    • MikroTik CRS312-4C+8XG-RM 12-port 10Gbps switch
    • 2 x TP-Link Omada EAP670 access points with 2.5Gbps PoE injectors
    • TP-Link TL-SG1218MPE 16-port 1Gbps PoE switch for security cameras (3 x Dahua outdoor cams and 2 x Amcrest indoor cams). All cameras are on a separate VLAN that has no internet access.
    • SLZB-06 PoE Zigbee coordinator for home automation - all my light switches are Inovelli Blue Zigbee smart switches, plus I have a bunch of smart plugs. Aqara temperature sensors, buttons, door/window sensors, etc.

    Home server:

    • Intel Core i5-13500
    • Asus PRO WS W680M-ACE SE mATX motherboard
    • 64GB server DDR5 ECC RAM
    • 2 x 2TB Solidigm P44 Pro NVMe SSDs in ZFS mirror
    • 2 x 20TB Seagate Exos X20 in ZFS mirror for data storage
    • 14TB WD Purple Pro for security camera footage. Alerts SFTP’d to offsite server for secondary storage
    • Running Unraid, a bunch of Docker containers, a Windows Server 2022 VM for Blue Iris, and an LXC container for a Borg backup server.

    For things that need 100% reliability, like email, web hosting, DNS hosting, etc, I have a few VPSes “in the cloud”. The one for my email has an AMD EPYC CPU, 16GB RAM, 100GB of NVMe space, and a 10Gbps connection for $60/year at GreenCloudVPS in San Jose, and I have similar ones at HostHatch in Los Angeles (but with 40Gbps instead of 10Gbps).

    I’ve got a bunch of other VPSes, mostly for https://dnstools.ws/ which is an open-source project I run. It lets you perform DNS lookup, pings, traceroutes, etc from nearly 30 locations around the world. Many of those are sponsored which means the company provides them for cheap/free in exchange for a backlink.
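    The same kinds of checks can be run locally with standard tools; a quick sketch (example.com is just a placeholder domain):

```shell
dig +short A example.com @8.8.8.8   # A-record lookup via a specific resolver (Google public DNS)
dig +trace example.com              # follow the delegation chain down from the root servers
ping -c 4 example.com               # basic reachability check
traceroute example.com              # network path to the host
```

These are network-dependent, so results will vary by vantage point, which is exactly why a multi-location tool like dnstools.ws is useful.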

    This Lemmy server is on another GreenCloudVPS system - their ninth birthday special which has 9GB RAM and 99GB NVMe disk space for $99 every three years ($33/year).



  • I’d recommend building your own server rather than buying an off-the-shelf NAS. An off-the-shelf NAS usually has limited upgrade options - if you want to make it more powerful in the future, you generally have to buy a whole new unit. If you build your own, you can upgrade it freely later: add more RAM, swap in a faster CPU, and so on.

    If you want a small one, the Asus Prime AP201 is a pretty nice (and affordable!) case.


  • dan@upvote.auto to Selfhosted@lemmy.world · Recommendation for NAS · edited 7 months ago

    Modern clients support most modern codecs, so codec support isn’t as much of a problem as in the old days when we had to use sketchy codec packs.

    I mentioned the location because the primary reason to transcode is that you don’t have enough bandwidth to stream the original file. That’s not an issue over a LAN.
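    As a rough sanity check (all the bitrates here are illustrative assumptions, not measurements):

```shell
stream_mbps=80     # assumed peak bitrate of a 4K Blu-ray remux
lan_mbps=1000      # gigabit LAN
wan_up_mbps=20     # a modest residential upload link

[ "$stream_mbps" -lt "$lan_mbps" ] && echo "LAN: direct play is fine"
[ "$stream_mbps" -gt "$wan_up_mbps" ] && echo "WAN: transcoding (or a lower-bitrate copy) needed"
```

Even a worst-case remux fits comfortably in a gigabit LAN, but blows way past a typical upload link, so transcoding only matters for remote streaming.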


  • I personally prefer Docker over LXC since the containers are essentially immutable. You can completely delete and recreate a container without causing issues. All your data is stored outside the container in a Docker volume, so deleting the container doesn’t delete your data. Your docker-compose file describes the exact state of the containers (as long as you pin version numbers rather than using tags like “latest”).
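    That delete-and-recreate workflow looks something like this (the service name “myapp” is hypothetical; it assumes a compose file that pins an exact image version and keeps data in a named volume):

```shell
docker compose up -d myapp       # create and start the container
docker compose rm -sf myapp      # stop and delete the container entirely
docker compose up -d myapp       # recreate it exactly as the compose file describes
docker volume ls                 # the named volume (and your data) survived the recreate
```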

    Good Docker containers are “distroless”, meaning they contain only the app and the bare minimum dependencies it needs to run, without any extraneous OS tooling. LXC containers aren’t as light since, as far as I know, they always contain a full OS userland.
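    A minimal multi-stage sketch of that idea (the Go app is hypothetical; the final base is one of Google’s actual “distroless” images):

```dockerfile
# Build stage: full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: just the static binary - no shell, no package manager, no OS tooling
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```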



  • I like Unraid… It has a UI for VMs and LXC containers like Proxmox, but it also has a pretty good Docker UI. I’ve got most things running on Docker on my home server, but I’ve also got one VM (Windows Server 2022 for Blue Iris) and two LXC containers. (LXC support is a plugin; it doesn’t come out-of-the-box)

    Docker with Proxmox is a bit weird, since Proxmox doesn’t natively support Docker; you have to run Docker inside an LXC container or a VM.
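    If you do run Docker inside an LXC container on Proxmox, the container needs nesting enabled (100 is an example container ID):

```shell
pct set 100 --features nesting=1,keyctl=1   # enable nesting (and keyctl, which Docker wants)
pct reboot 100                              # restart the container to apply
```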



  • What’s your actual end goal? What are you trying to protect against? Do you only want certain systems on your network to be able to access your apps? There’s not much point in a firewall if you’re just going to open the ports to the whole network anyway.

    If you want it to be more secure, I’d close all the ports except 22 (SSH) and 443 (HTTPS), stick a reverse proxy in front of everything (like Nginx, Caddy, Traefik, etc.), and use Authentik for authentication with two-factor authentication enabled. Get a TLS certificate using Let’s Encrypt and a DNS challenge. You have to use a real domain name for your server, but the server does not have to be publicly accessible; Let’s Encrypt works for local servers too.
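    As one hypothetical sketch of the DNS-challenge part, using certbot with its Cloudflare plugin (other DNS providers have equivalent plugins, and Caddy and Traefik can do DNS challenges natively):

```shell
# Domain and credentials path are examples; requires the certbot-dns-cloudflare plugin
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d nas.example.com
```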

    The LinuxServer project has a Docker image called “SWAG” that bundles Nginx with reverse proxy configs for many common apps. Might be a decent way to go. The reverse proxy should be on the same Docker network as the other containers, so it can reach them directly even though you won’t be exposing their ports any more.
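    The shared-network part might look like this (the network name and “myapp” are hypothetical):

```shell
docker network create proxynet        # one network shared by the proxy and the apps
docker network connect proxynet swag
docker network connect proxynet myapp
# The proxy can now reach http://myapp:<port> directly;
# myapp doesn't need to publish any host ports at all.
```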

    Authentik will give you access controls (eg to only allow particular users to access particular apps), access logs for whenever someone logs in to an app, and two-factor auth for everything. It uses OIDC/OAuth2 or SAML, or its own reverse proxy for apps that don’t support proper auth.


  • dan@upvote.auto to Selfhosted@lemmy.world · Second hand disks? · edited 7 months ago

    If you keep an eye out for sales, you can get new drives for not much more than used. I got two Seagate Exos X20 20TB drives for around US$240 each on sale. One from Newegg and one from ServerPartDeals.

    Regardless of whether you buy new or used, buy the drives from multiple suppliers, as that makes it likely they’ll come from different batches. You don’t want an array where every drive came from the same batch, since that increases risk: if there was a manufacturing issue with that batch, it’s possible all the drives will fail in the same way.


  • I used to use mdadm, but ZFS mirrors (equivalent to RAID1) are quite nice. ZFS automatically stores checksums. If some data is corrupted on one drive (meaning the checksum doesn’t match), it automatically fixes it for you by getting the data off the mirror drive and overwriting the corrupted data. The read will only fail if the data is corrupted on both drives. This helps with bitrot.

    ZFS also has raidz1 and raidz2, which use one or two disks for parity and have the same self-healing advantages. I’ve only got two 20TB drives in my NAS though, so a mirror is fine.
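    For reference, a two-disk mirror with periodic scrubs looks roughly like this (the pool name and device paths are examples):

```shell
# by-id paths stay stable across reboots
zpool create tank mirror \
  /dev/disk/by-id/ata-DRIVE_A_EXAMPLE \
  /dev/disk/by-id/ata-DRIVE_B_EXAMPLE

zpool scrub tank       # read-verify every block against its checksum
zpool status -v tank   # shows scrub progress and any repaired/unrecoverable errors
```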



  • dan@upvote.auto to Selfhosted@lemmy.world · Plex for books? · 7 months ago

    The epub is emailed to our Kindle.

    Amazon have been making this harder and harder. Originally you could define an allowlist of senders, and any emails from those senders would go to the Kindle. Then they changed it so you have to click a link in an email to approve it. Now, you have to go to Amazon, find the Kindle content page (which is well hidden), and click a button to approve it.

    If you know a workaround for that then I’d love to hear it.



  • “they first talk to the BE to coordinate the firewall piercing on both ends.”

    This is the NAT hole punching I mentioned in my post, which as far as I know uses the relay servers (which are open-source) and not their proprietary server. Sometimes systems can reach each other directly (for example, if you forward port 41641) in which case they can directly connect and you don’t need a relay at all.
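    With Tailscale specifically, you can check whether a peer connection is direct or relayed (the peer name is an example):

```shell
tailscale ping myserver   # reports whether the pong came direct or via a DERP relay
tailscale status          # lists peers and their current connection paths
```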



  • “Some of them just find it convenient that CF is registrar, DNS provider and sets up reverse proxy”

    You should never put all your eggs in one basket. Using one company for all three of these essentially gives them full control of your domain.

    It’s a best practice to use separate companies for registrar and website/proxy. If there’s ever some sort of dispute about the contents of the site, you can change the DNS to point to a different host. That’s not always possible when the same company handles both.