  • I tried OPNsense/pfSense, VyOS (the rolling release; stable is paid only), and a couple of commercial options. Surprisingly, not a single free/FOSS option could do IPv6 properly; I was looking specifically for prefix delegation to downstream routers (see the sketch below for what that involves). I paid for a single RouterOS CHR license and never bothered since.

    But otherwise I tend to like VyOS. The rolling releases being the only free option makes it somewhat questionable for anything more serious, though.
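
    To make “prefix delegation to downstream routers” concrete: the ISP delegates one large prefix to the edge router via DHCPv6-PD, and the edge router carves smaller prefixes out of it for the routers behind it. A minimal sketch of the arithmetic, with a made-up documentation prefix:

    ```python
    # The ISP delegates one big prefix via DHCPv6-PD; the edge router
    # then carves smaller prefixes out of it for downstream routers.
    # The prefix and sizes here are made-up examples.
    import ipaddress

    delegated = ipaddress.ip_network("2001:db8:aa00::/56")  # from the ISP

    # give each downstream router its own /60 (room for 16 /64 LANs each)
    for i, subnet in enumerate(delegated.subnets(new_prefix=60)):
        if i >= 3:
            break
        print(f"downstream router {i}: {subnet}")
    ```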


  • I’d be curious to see a comparison with Logseq. As is rightly mentioned, there are thousands of note-taking apps, and I’m not quite sure I see the selling point of SB. I really love the idea of notes as a database, but the query language seems subpar, more akin to Obsidian’s Dataview than the overwhelming power of TiddlyWiki’s filters or Logseq’s queries.

    I went from Evernote to TiddlyWiki to Obsidian to Logseq, and I’m somewhat stuck there now because I get the powerful queries in a very neat UI. With the market as oversaturated as it is, it’d be nice to see what SilverBullet brings to the game that others don’t, and what its distinguishing features are.



  • I went for a much simpler approach lately as I downscaled my hardware for efficiency.

    I run NixOS on the bare metal. It gives system management a declarative approach, just like Kubernetes would. On top of that, I run libvirt as the hypervisor. In other scenarios I’d use tinyvmm and cloud-hypervisor, but I found QEMU way better for the variety of homelab workloads, and libvirt is pretty straightforward.

    Some VMs have PCI passthrough (e.g., my RouterOS VM gets a bunch of NICs directly), and some have various funny network topologies. Libvirt used to be a pain in that regard, but it’s actually fine with NixOS, because you manage both sides of the networking stack in declarative configuration.

    I run NixOS on the VMs too (now for the sake of easy upgrades), and I have a bit of a split between running services natively (systemd is very good at “containerizing” things nowadays) and using Docker (mostly out of laziness; e.g., Elastiflow was easier to deploy this way). Finally, I have a single dockerized Ubuntu that’s more like a VM (as in, I never had a Dockerfile for it; it’s fully stateful) running the Matter home automation bits, because I gave up on properly containing the Matter Python stack and went for the easy way out.

    Now, a word about alternatives.

    I used to run Ubuntu. No more: upgrading the OS is always a huge pain, even if everything is in Docker. I want my OS to be managed in a config file, with easy rollbacks to the previous state.

    I used to run k3s, but even though it’s much thinner than k8s, it’s still very RAM-hungry, and I just don’t want to pay for that. Besides, complex topologies are often non-trivial to express because of how its networking works, and Multus is a world of pain.

    I used to run different hypervisors for the VMs (kubevirt, tinyvmm, a bunch of others). I went back to libvirt mostly because it’s straightforward for tuning the very specific QEMU bits I care about in the homelab. I have some CPU overprovisioning, so I want my quotas set up extremely precisely, sacrificing the right workloads (see the sketch below).
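
    In case it’s useful, here’s roughly how I eyeball that overprovisioning ratio. A sketch using the libvirt Python bindings (libvirt-python), assuming qemu:///system is reachable; it only counts the vCPUs of running domains:

    ```python
    # Compare vCPUs handed out to running VMs against the host's
    # physical CPU count. Requires the libvirt-python bindings and
    # access to qemu:///system.
    import libvirt

    conn = libvirt.open("qemu:///system")
    host_cpus = conn.getInfo()[2]  # getInfo() -> [model, memory, cpus, ...]

    total_vcpus = 0
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        vcpus = dom.maxVcpus()
        total_vcpus += vcpus
        print(f"{dom.name()}: {vcpus} vCPUs")

    print(f"overprovisioning: {total_vcpus} vCPUs on {host_cpus} cores "
          f"({total_vcpus / host_cpus:.2f}x)")
    ```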


  • I’ll make a note here that a firewall is useful for internal traffic, too. Those IoT devices can get pretty annoying, so you’d want to, e.g., drop your cheap webcams into a VLAN and disallow them from talking to anything but their cloud (and especially the other VLANs), or isolate Alexa-capable devices so they won’t try to figure out what else you’ve got in your house over mDNS (they will).

    A managed switch would do nicely. Having isolated ports on the switch (and the Wi-Fi AP) is also great if you want to make sure a specific device will only talk to the gateway and not to its peers (a quick reachability test for this is sketched below).
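
    From a host sitting on the IoT VLAN, something like this verifies the isolation actually holds; the addresses and ports are placeholders for your own network:

    ```python
    # Run from a host on the IoT VLAN: the gateway should answer,
    # while a host on another VLAN should be unreachable.
    # Addresses and ports below are placeholders.
    import socket

    targets = [
        ("192.168.50.1", 53),    # gateway: should connect
        ("192.168.10.20", 445),  # host on another VLAN: should be blocked
    ]

    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host}:{port} reachable")
        except OSError:
            print(f"{host}:{port} blocked")
    ```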


  • Here’s how it works: Unifi devices need to communicate with the controller over tcp/8080 to maintain their provisioned state. By default, the controller adopts a device with http://controller-ip:8080/inform, which means that if you ever change the controller IP, you’ll have to adopt your devices all over again.

    There are several other ways to adopt a device, most notably DHCP option 43 and DNS (see the option 43 sketch below). Of those, setting up DNS is generally easier: you’d provision the DNS record to point at your controller and then update the inform address on all your devices (including the USG).

    Now, there’s still the problem of keeping your controller’s IP and DNS record in sync. Unifi generally doesn’t do DNS names for its DHCP leases, and the devices can’t use mDNS, so you’ll have to figure out a solution for that. Or you can just cut it short and make sure the controller has a static IP: not a static DHCP lease, but literally a static address. That allows your controller to function independently of the USG, as long as your devices don’t have to reach it across VLANs.
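
    If you do go the DHCP option 43 route instead: the payload is just Ubiquiti’s sub-option 0x01 wrapping the controller’s IPv4 address. A sketch of the encoding (the controller address is a placeholder):

    ```python
    # DHCP option 43 payload that Unifi devices expect:
    # sub-option 0x01, length 0x04, then the controller's IPv4 bytes.
    import ipaddress

    def unifi_option_43(controller_ip: str) -> str:
        ip = ipaddress.IPv4Address(controller_ip)
        payload = bytes([0x01, 0x04]) + ip.packed
        return payload.hex(":")

    print(unifi_option_43("192.168.1.5"))  # -> 01:04:c0:a8:01:05
    ```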


  • Unifi specifically expects the controller address not to change. You have several options. There’s the “override controller address” setting, which you can use to point the devices at a DNS name instead of an IP address; the DNS record can then track your controller. It doesn’t exactly solve your issue, though, as the USG doesn’t assign DNS names to dynamic allocations.

    Another option is to give the controller a static IP allocation. This way, if you reboot everything, the USG will come up with the last good config, will (eventually) allocate the IP for the controller, and will adopt itself.

    Finally, the most bulletproof option is to just give the controller a static IP address. It’s a special case, so it’s reasonable to do so. It’s basically like NetFlow: you can only send it to one specific address, so you have to keep your collector in one place.

    I’d advise against moving DHCP and DNS off Unifi unless you have a good reason to, because you’d lose a good chunk of what Unifi provides in terms of network management. The USG is surprisingly robust in that regard (unlike the UDMs), and can even run a NextDNS forwarding resolver locally.


  • That’s a great example! I’m actually aware of this case. Note what the article says, though:

    Meta’s sanction is for breaching conditions set out in the pan-EU regulation governing transfers of personal data to so-called third countries (in this case the US) without ensuring adequate protections for people’s information.

    And here we’re discussing the GDPR in the context of data access requests. So you’re absolutely correct that they suck at following it to the letter, but I don’t think this particular case applies to this discussion.


  • I would suggest sending a GDPR data request to Facebook (if you’re in a position to be covered by the GDPR and have a Facebook account) and to your Lemmy instance (lemmy.world, in this case).

    Facebook will have a bunch more data on you, undoubtedly, but it will take no time for them to process the request.

    Lemmy? Good luck with that. First, try finding their privacy page to see what data they actually collect on you, and whom they send it to for processing. Then try reaching the admins, maybe? Lemmy has no tooling whatsoever to help with this, so they’d have to get their hands dirty with PostgreSQL, too (a sketch of what that might look like is below).

    I like FB no more than anyone else in this thread, but let’s be realistic: they have a much better story of complying with the GDPR specifically than anything in the fediverse.
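
    To give a flavor of what “hands dirty with PostgreSQL” would mean, here’s a hypothetical sketch. The table and column names (person, post, comment, creator_id) are my assumptions about Lemmy’s schema and may not match any particular version:

    ```python
    # Hypothetical sketch of a manual data inventory for one user in a
    # Lemmy database. Table/column names are assumptions about the
    # schema, not a documented interface.
    import psycopg2

    conn = psycopg2.connect("dbname=lemmy user=lemmy")
    cur = conn.cursor()

    cur.execute("SELECT id FROM person WHERE name = %s", ("some_user",))
    (person_id,) = cur.fetchone()

    for table in ("post", "comment"):
        # table names come from the fixed tuple above, not user input
        cur.execute(f"SELECT count(*) FROM {table} WHERE creator_id = %s",
                    (person_id,))
        print(f"{table}: {cur.fetchone()[0]} rows to export")
    ```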




  • At that point it’s easier to run your own instance, I guess? As it stands now, it’s not trivial to scale either the DB storage (Postgres) or the backend (Lemmy).

    I don’t think users donating parts of their compute is the way to go, honestly. You’d have to think about bad actors (such users would effectively be instance admins for some subset of the data), and it might quickly deteriorate into the weird crypto world (i.e., let’s use a blockchain as storage because no one can be trusted to actually count the upvotes).

    Unfortunately, it’s a very tricky issue to solve. I’d say donating to your instance, so that its operators have enough funds to keep it running, is the way.