Jellyfin still doesn’t have a good solution for music. None of the players that support it are anywhere near as good as Plexamp.
Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
https://d.sb/
Mastodon: @dan@d.sb
At home - Networking
Home server:
For things that need 100% reliability, like email, web hosting, DNS hosting, etc, I have a few VPSes “in the cloud”. The one for my emails is an AMD EPYC system with 16GB RAM, 100GB of NVMe space, and a 10Gbps connection for $60/year at GreenCloudVPS in San Jose, and I have similar ones at HostHatch in Los Angeles (but with 40Gbps instead of 10Gbps).
I’ve got a bunch of other VPSes, mostly for https://dnstools.ws/ which is an open-source project I run. It lets you perform DNS lookup, pings, traceroutes, etc from nearly 30 locations around the world. Many of those are sponsored which means the company provides them for cheap/free in exchange for a backlink.
This Lemmy server is on another GreenCloudVPS system - their ninth-birthday special, which has 9GB RAM and 99GB of NVMe disk space for $99 every three years ($33/year).
FYI, `docker-compose` (with a hyphen) is the legacy Compose v1, which was deprecated a few years ago and no longer receives updates. `docker compose` (with a space) is what you should be using these days.
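For example (assuming the Compose v2 plugin is installed), the invocation just changes from a hyphen to a space:

```shell
# Compose v1 (deprecated standalone binary):
docker-compose up -d

# Compose v2 (plugin, note the space):
docker compose up -d

# Verify which Compose you have installed:
docker compose version
```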
I’d recommend building your own server rather than buying an off-the-shelf NAS. The NAS will have limited upgrade options - usually, if you want to make it more powerful in the future, you’ll have to buy a new one. If you build your own, you can freely upgrade it in the future - add more memory (RAM), make it faster by replacing the CPU with a better one, etc.
If you want a small one, the Asus Prime AP201 is a pretty nice (and affordable!) case.
Modern clients support most of the modern codecs, so codec support isn’t as bad as in the old days when we had to use sketchy codec packs.
I mentioned the location because the primary reason to transcode is that you don’t have enough bandwidth to stream the original file. That’s not an issue over a LAN.
I personally prefer Docker over LXC since the containers are essentially immutable. You can completely delete and recreate a container without causing issues. All your data is stored outside the container in a Docker volume, so deleting the container doesn’t delete your data. Your `docker-compose` file describes the exact state of the containers (as long as you pin version numbers rather than using tags like `latest`).
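As a rough sketch, a compose file with a pinned image version might look like this (the service, image tag, and paths here are just examples):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:10.9.7   # pinned version instead of :latest
    volumes:
      - jellyfin-config:/config       # data persists outside the container
      - /mnt/media:/media:ro
    restart: unless-stopped

volumes:
  jellyfin-config:
```

Deleting and recreating the container leaves the `jellyfin-config` volume untouched.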
Good Docker containers are “distroless”, meaning they contain only the app and the bare minimum dependencies it needs to run, without any extraneous OS components. LXC containers aren’t as lightweight since, as far as I know, they always contain a full OS.
Just make sure you install the virtio drivers.
I like Unraid… It has a UI for VMs and LXC containers like Proxmox, but it also has a pretty good Docker UI. I’ve got most things running on Docker on my home server, but I’ve also got one VM (Windows Server 2022 for Blue Iris) and two LXC containers. (LXC support is a plugin; it doesn’t come out-of-the-box)
Docker with Proxmox is a bit awkward, since Proxmox doesn’t natively support Docker; you have to run Docker inside an LXC container or a VM.
adding something like authelia.
I used to use Authelia, but Authentik is nicer since it’s mostly configured through a web UI. It also supports SAML for services that don’t support OpenID Connect. It also has a proxy mode like Authelia, but that’s not recommended if the service has proper SSO support. There’s just a bit of an initial learning curve.
What’s your actual end goal? What are you trying to protect against? Do you only want certain systems on your network to be able to access your apps? There’s not much point in a firewall if you’re just going to open the ports to the whole network anyway.
If you want it to be more secure, I’d close all the ports except 22 (SSH) and 443 (HTTPS), stick a reverse proxy in front of everything (like Nginx, Caddy, or Traefik), and use Authentik for authentication, with two-factor authentication enabled. Get a TLS certificate using Let’s Encrypt with a DNS challenge. You do have to use a real domain name for your server, but the server does not have to be publicly accessible - Let’s Encrypt works for local servers too.
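For instance, a minimal Caddyfile sketch using a DNS challenge (this assumes a Caddy build that includes the Cloudflare DNS plugin; the domain, token variable, and upstream are placeholders):

```
jellyfin.home.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy jellyfin:8096
}
```

Because the challenge is answered via DNS records, the certificate gets issued even though the server is never reachable from the internet.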
The LinuxServer project has a Docker image called “SWAG” that bundles Nginx with preset reverse proxy configs for many common apps. Might be a decent way to go. The reverse proxy should be on the same Docker network as the other containers, so that it can reach them directly even though you won’t be exposing their ports any more.
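A sketch of what that shared network might look like in a compose file (the image tags and the app behind the proxy are placeholders):

```yaml
services:
  swag:
    image: lscr.io/linuxserver/swag:2.11.0   # example tag
    ports:
      - "443:443"          # only the proxy publishes a port
    networks: [proxy]

  myapp:                   # hypothetical app behind the proxy
    image: myapp:1.0.0
    networks: [proxy]      # no "ports:" entry - reachable only via the proxy

networks:
  proxy: {}
```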
Authentik will give you access controls (eg to only allow particular users to access particular apps), access logs for whenever someone logs in to an app, and two-factor auth for everything. It uses OIDC/OAuth2 or SAML, or its own reverse proxy for apps that don’t support proper auth.
If you keep an eye out for sales, you can get new drives for not much more than used. I got two Seagate Exos X20 20TB drives for around US$240 each on sale. One from Newegg and one from ServerPartDeals.
Regardless of whether you buy new or used, buy the drives from different suppliers, as that makes it likely they’ll come from different batches. You don’t want an array where all the drives came from the same batch, since that increases risk: if there was a manufacturing issue with that batch, it’s possible all the drives will fail in the same way.
I used to use mdadm, but ZFS mirrors (equivalent to RAID1) are quite nice. ZFS automatically stores checksums. If some data is corrupted on one drive (meaning the checksum doesn’t match), it automatically fixes it for you by getting the data off the mirror drive and overwriting the corrupted data. The read will only fail if the data is corrupted on both drives. This helps with bitrot.
ZFS also has raidz1 and raidz2, which use one or two disks for parity and have the same self-healing advantages. I’ve only got two 20TB drives in my NAS though, so a mirror is fine.
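Setting up a two-drive mirror is basically a one-liner; here’s a sketch (the device paths are placeholders - use /dev/disk/by-id paths so the pool survives device renumbering):

```shell
# Create a mirrored pool named "tank" from two drives:
zpool create tank mirror \
  /dev/disk/by-id/ata-ST20000NM007D_SERIAL1 \
  /dev/disk/by-id/ata-ST20000NM007D_SERIAL2

# Periodically walk every block, verify checksums, and repair
# corrupted copies from the other side of the mirror:
zpool scrub tank
zpool status tank
```

Most distros ship a systemd timer or cron job for monthly scrubs; that’s what actually catches bitrot before you need the data.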
Calibre-web supports OPDS and uses the Calibre database.
The epub is emailed to our Kindle.
Amazon have been making this harder and harder. Originally you could define an allowlist of senders, and any emails from those senders would go to the Kindle. Then they changed it so you have to click a link in an email to approve it. Now, you have to go to Amazon, find the Kindle content page (which is well hidden), and click a button to approve it.
If you know a workaround for that then I’d love to hear it.
they will need to adopt IPv6!
And then find your IP within a /56 or /64 range (depending on what your ISP gives you); a single /64 contains 2⁶⁴ possible addresses. Good luck.
they first talk to the BE to coordinate the firewall piercing on both ends.
This is the NAT hole punching I mentioned in my post, which as far as I know uses the open-source relay servers rather than their proprietary coordination server. Sometimes systems can reach each other directly (for example, if you forward port 41641), in which case they connect directly and no relay is needed at all.
Get a $15/year VPS and run your own tunnel using Wireguard.
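A minimal sketch of the VPS side of such a tunnel (all keys, IPs, and the port are placeholders):

```ini
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the home server
PublicKey  = <home-server-public-key>
AllowedIPs = 10.0.0.2/32
```

On the home side, point `Endpoint` at the VPS’s public IP and set `PersistentKeepalive = 25` so the tunnel stays open through your NAT; the VPS can then forward ports 80/443 to 10.0.0.2.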
Some of them just find it convenient that CF is registrar, DNS provider and sets up reverse proxy
You should never put all your eggs in one basket. Using one company for all three of these essentially gives them full control of your domain.
It’s a best practice to use separate companies for registrar and website/proxy. If there’s ever some sort of dispute about the contents of the site, you can change the DNS to point to a different host. That’s not always possible when the same company handles both.
It’s possible it’s a file format issue. A video using a common format like H264 or H265 should work fine though. What format is the file and what codecs does it use?
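You can check with ffprobe (the filename is a placeholder):

```shell
# Print the container format plus the codec of each stream:
ffprobe -v error \
  -show_entries format=format_name:stream=codec_type,codec_name \
  -of default=noprint_wrappers=1 \
  video.mkv
```

If the video stream shows something like mpeg4 or vc1 rather than h264/hevc, that’s likely the culprit.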
I know everyone says to use Proxmox, but it’s worth considering xcp-ng as well.