• 0 Posts
  • 58 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • I think a VPS and moving to self-hosted NetBird would be the simplest solution for you. $5 per month gives you a range of options, and you can go even lower with things like yearly subscriptions. That way you get around the subdomain issue, get a proper tunnel and can proxy whatever traffic you want into your home.

    As for the control scheme for your home automation, you’ll need to come up with something that fits you, but I strongly advise against letting users into Home Assistant itself. You could build a simple web interface that talks to HA via its API; going through Node-RED makes this super simple if building against the API directly seems daunting (a rough sketch follows at the end of this comment).

    If an RPi 4 is what you’ve got and that’s it, then I guess you’re kinda stuck for the time being. Home Assistant is quite lightweight if you’re not doing anything crazy, so it runs well even on an RPi 3, and the same goes for NAS software for home use. If SBCs are your style, my recommendation is to set up an alert on whatever second-hand sites operate in your area and pick up a cheap one so you can separate things and keep the setup simpler.
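
    To make the API point above concrete, here’s a minimal sketch (in Python, using requests) of a backend calling Home Assistant’s REST API with a long-lived access token. The URL, token and entity ID are placeholders for your own setup, not anything from the original post.

    ```python
    # Minimal sketch: call a Home Assistant service over its REST API.
    # Assumes HA is reachable at homeassistant.local:8123 and that you have
    # created a long-lived access token under your HA user profile.
    import requests

    HA_URL = "http://homeassistant.local:8123"   # placeholder
    HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # placeholder

    def call_service(domain: str, service: str, entity_id: str) -> None:
        """Call e.g. light.turn_on via POST /api/services/<domain>/<service>."""
        resp = requests.post(
            f"{HA_URL}/api/services/{domain}/{service}",
            headers={"Authorization": f"Bearer {HA_TOKEN}"},
            json={"entity_id": entity_id},
            timeout=10,
        )
        resp.raise_for_status()

    # Your own web UI (or a Node-RED flow) would sit in front of calls like this:
    call_service("light", "turn_on", "light.living_room")  # hypothetical entity
    ```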


  • That’s one part of it, but the other is that there’s no proper way to ensure you won’t cause issues down the line and it makes the configuration unclean and harder to maintain.

    It also makes your setup dependent on seemingly unrelated things, like the certificate for the domain, which is some completely different application’s problem but will break your Home Assistant setup all the same. That dependency issue can be a nightmare to troubleshoot in some instances, especially when it comes to stuff like authentication. Try doing SSO towards two different applications running on different subpaths of the same domain…


  • ninjan@lemmy.mildgrim.com to Selfhosted@lemmy.world · I love Home Assistant, but... · 9 months ago

    I can’t quite grasp your use case, I feel; pretty much all your complaints seem… odd. To me at least.

    First, the subdomain. I think HA is completely right that proxying on a subpath is basically an anti-pattern that just makes things worse for you and is always a bad idea (with very few exceptions).

    As for your tunnel, I don’t know how you’ve set it up and I haven’t used Tailscale, but only allowing one domain sounds like a very arbitrary limit; is it something that costs money to add? I use NetBird, which I self-host on my VPS and from there tunnel into my much beefier home setup.

    Then Docker in HAOS. The proper way to run HA, I feel, is for sure HAOS, in its own VM or on dedicated hardware. This is because you will likely need to attach additional hardware, like a USB stick providing support for more protocols such as Zigbee or Matter. HAOS really isn’t a good solution for running all your self-hosted stuff, and was never intended to be. Running Plex in HA, for instance, is just a plain bad idea, even if it can be done. As such, the need for an external drive seems strange as well; if you need to interact with storage you should set up a NAS and share it over Samba. All this to say that HA should be one VM/device and your Docker environment another VM.

    As for authentication, there are 10k-plus contributors to Home Assistant yearly, but very few bother to make authentication more streamlined. I would’ve loved native OpenID/OAuth2 support, but there are ways to get it with custom components, and in the end I quite strongly feel that if the end users of your smart-home setup (i.e. the wife and kids) need to log in to Home Assistant then you’ve probably got more work to do. Remote controls that interact with HA handle the vast majority of manual interaction, and I’ve dabbled with self-hosted voice interfaces for the more complex operations.

    Sorry if this came across as lecturing, that’s not my intention. I just suspect you’re making things harder for yourself and maybe have a strange idea of how to self-host in general?


  • Well, as someone also self-hosting email, I agree with his solutions, but he paints a picture of how bad it is that I feel is a bit exaggerated. But then again I host for myself and my family; I suspect it gets a bit different when you have many users and send hundreds of mails per day.

    The only one I’ve had trouble with is Microsoft; they’re the strictest and you need to get some support from them to make it work reliably. Google has an automated service.



  • The Switch is very weak hardware-wise but also very reliable, I feel. For being a handheld device they’re surprisingly tough, and cartridges have a much better chance at longevity than discs, so of all consoles I’d put the Switch at the top for longevity and best odds of working well 20 years from now. Do note this is ONLY true of cartridge games. If you have Nintendo eShop games I don’t expect them to work 20 years from now, because that eShop might not be around and I’m confident it uses some form of phone-home check-in to verify DRM. That is likely fixable but out of scope for this discussion.

    As for the Steam Deck and other handheld PCs, the games are less likely to survive 20 years: games have already started to disappear from Steam (unpopular ones), and I very much doubt every game I have today will still be available/playable, because Steam will eventually drop support and not every game is DRM-free in a way that lets you run it once it’s dropped from Steam. The PC handhelds also tend to work very poorly without Internet, since Steam wants to phone home from time to time. As for the hardware, I think the Steam Deck might last 20 years given it’s Linux-based. Stuff like the ROG Ally will be hard to keep working due to the outdated Windows on it and the likelihood that you can’t upgrade it while games/Steam won’t work without an upgrade.





  • ninjan@lemmy.mildgrim.com to Selfhosted@lemmy.world · multimedia manager by series · 9 months ago

    Doesn’t sound like its own “product” to be honest. I’d probably look at an alternative presentation layer that can present what’s in Jellyfin and also act as the presentation layer for the top solutions for books, comics etc. If nothing like that exists, I think there are people who would be interested in a unified media presenter. It doesn’t even need to actually play the media, just link to it.
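
    As a rough sketch of the “present what’s in Jellyfin” part, something like the Python below could pull a library listing over Jellyfin’s REST API and just link out to it. The server URL and API key are placeholders, and the exact /Items parameters should be checked against your Jellyfin version’s API docs.

    ```python
    # Rough sketch: list series from a Jellyfin server so an external
    # presentation layer can display and link to them without playing anything.
    # Server URL and API key are placeholders; verify the /Items parameters
    # against your Jellyfin version's API documentation.
    import requests

    JELLYFIN_URL = "http://jellyfin.local:8096"   # placeholder
    API_KEY = "YOUR_JELLYFIN_API_KEY"             # placeholder

    resp = requests.get(
        f"{JELLYFIN_URL}/Items",
        headers={"X-Emby-Token": API_KEY},
        params={"IncludeItemTypes": "Series", "Recursive": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json().get("Items", []):
        # Each item carries enough metadata (name, id) to build a deep link.
        print(item.get("Name"), item.get("Id"))
    ```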





  • If you can fool the Internet into thinking that traffic coming from the VPS has the source IP of your home machine, what stops you from assuming another IP to bypass an IP whitelist?

    Also, if you expect return communication, replies are addressed to the source IP, so for your VPS to receive them while faking your home machine’s IP it would have to intercept traffic addressed to an IP it doesn’t own. That technique would be very powerful for man-in-the-middle attacks, i.e. intercepting traffic intended for someone else and manipulating it without leaving a trace.

    An IP, by virtue of how the protocol works, needs to be a unique identifier for a machine. There are techniques, like CGNAT, that allow multiple machines to share an IP, but that really works (in simplified terms) like a proxy and thus breaks the direct connection and limits you to specific ports. It’s also layered on top of the IP protocol and has its own requirements, and either way it’s the endpoint, in your case the VPS, whose IP will be presented.
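
    A quick way to see this in practice: a plain TCP server only ever sees the address of whatever endpoint actually opened the connection, so anything relayed through a VPS shows up with the VPS’s IP, not your home machine’s. A minimal Python sketch (port number is arbitrary):

    ```python
    # Tiny TCP server that prints the address of whoever connects. If the
    # connection is relayed through a VPS or proxy, the address printed here
    # is the relay's IP, not the original client's - the relay is the actual
    # endpoint of the TCP connection.
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 9000))  # arbitrary demo port
        srv.listen()
        while True:
            conn, addr = srv.accept()
            print(f"connection from {addr[0]}:{addr[1]}")
            conn.close()
    ```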


  • Preserve the source IP, you say? Why?

    The thing is that if you could do so (without circumventing the standards), that would imply an IP isn’t actually a unique identifier, which it needs to be. It would also mean circumventing whitelists/blacklists would be trivial (it’s not hard by any means, but it has some specific requirements).

    The correct way to do this, even if there might be some hack you could do to get the actual source IP through, is to put the source in an ‘X-Forwarded-For’ header (see the sketch at the end of this comment).

    As for ready-made solutions, I use NetBird, which has open-source clients for Windows, Linux and Android that I use without issues; it’s perfectly self-hostable and easy to integrate with your own IdP.
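
    For the ‘X-Forwarded-For’ route mentioned above, here’s a minimal Python sketch of a handler that recovers the original client IP from that header when requests arrive via a reverse proxy. Only trust the header when you control the proxy that sets it, since clients can send it themselves; the port is arbitrary.

    ```python
    # Minimal sketch: read the original client IP from X-Forwarded-For behind
    # a reverse proxy. Only trust this header if your own proxy sets or
    # overwrites it, because clients can forge it otherwise.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # X-Forwarded-For is a comma-separated chain; the first entry is
            # the original client, later entries are intermediate proxies.
            forwarded = self.headers.get("X-Forwarded-For")
            client_ip = forwarded.split(",")[0].strip() if forwarded else self.client_address[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(f"client ip: {client_ip}\n".encode())

    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()  # arbitrary demo port
    ```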



  • No, the scenario a VM protects against is that the T110’s motherboard/CPU/PSU/etc. craps out: instead of having to restore from off-site I can move the drives into another enclosure, map them the same way to the VM and start it up. Instead of having to wait for new hardware I can have the file server up and running again in 30 minutes, and it’s just as easy to move it onto the new server once I’ve sourced one.

    And in this scenario we’re only running the file server on the T110, but we still virtualize it with Proxmox because then we can easily move it to new hardware without having to rebuild or migrate anything. As long as we don’t fuck up the drive order or anything like that we’re fine; if we do, then we’re royally fucked.
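
    On the drive-order point: passing the disks to the VM by their stable /dev/disk/by-id names (rather than /dev/sdX) is what makes that kind of move safe, since those names follow the physical drive. A small Python sketch to list them, assuming a Linux host:

    ```python
    # Sketch: list the stable /dev/disk/by-id names on a Linux host and the
    # device nodes they currently point to. Mapping disks into a VM by these
    # IDs survives moving the drives to new hardware or reshuffling SATA ports,
    # unlike raw /dev/sdX names, which depend on enumeration order.
    from pathlib import Path

    for link in sorted(Path("/dev/disk/by-id").iterdir()):
        print(f"{link.name} -> {link.resolve()}")
    ```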


  • Yes, but in the post they also stated what they were working with in terms of hardware. I really dislike giving the advice “buy more stuff”, because not everyone can afford to and self-hosting often comes from a frugal place.

    Still, you’re absolutely not wrong, and I see value in both our opinions being featured here; this discussion we’re having is a good thing.

    Circling back to the VM thing though: even with dedicated hardware, if I used an old server for a NAS I would still virtualize it with Proxmox, if for no other reason than that it gives me mobility and an easier path to restoration if the hardware, like the motherboard, breaks.

    Still, your advice to buy a used server is good and absolutely what the OP should do if they want a proper setup and have the funds.


  • Sure, I’m not saying it’s optimal; optimal will always be dedicated hardware and redundancy in every layer. But my point is that you gain very little for quite the investment by breaking the file server out to dedicated hardware. It’s not just CPU and RAM you need, it’s also SATA headers and an enclosure. Most people self-hosting have either one or more SBCs, and if you have more than one SBC then yeah, the file server should be dedicated. The other common setup is an old gaming/office PC converted to server use, and in that case putting Proxmox on the whole server and running the NAS as a VM makes the most sense, instead of buying more hardware for very little gain.


  • There are absolutely no issues with passing hardware through directly to a VM. And virtualized is good, because we don’t want to “waste” a whole machine on just a file server. Sure, dedicated NAS hardware has some upsides in terms of ease of use, but you also pay an, imo, ridiculous premium for that ease. I run my OMV NAS as a VM on 2 cores and 8 GB of RAM (with four hard drives), but you can make do perfectly fine on 1 core and 2 GB of RAM if you want and don’t have too many devices attached or do too many IOPS-intensive tasks.