• 11 Posts
  • 281 Comments
Joined 1 year ago
Cake day: June 16th, 2023




  • LXD/Incus provides a management and automation layer that really makes things work smoothly, essentially replacing Proxmox. With Incus you can create clusters; download, manage and create OS images; run backups and restores; bootstrap machines with cloud-init; and move containers and VMs between servers (sometimes even live). Those are just a few of the things you can do with it and not with pure KVM/libvirt. It also has a WebUI for those interested.

    A big advantage of LXD is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.

    Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt, it is about augmenting them so they become easier to manage at scale and overall more efficient. It plays in the same space as, let’s say, Proxmox, and I can guarantee you that most people running Proxmox today will eventually move to Incus and never look back. It works way better, is truly open-source, has fewer bugs, doesn’t hold critical fixes back for paying users and has way less overhead.
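    To give an idea of that unified experience, a few illustrative commands (instance and server names below are made up; note how the container and VM workflows only differ by a flag):

```shell
# Same CLI for containers and VMs; only --vm differs
incus launch images:debian/12 web          # system container
incus launch images:debian/12 db --vm      # full virtual machine
incus exec web -- apt update               # run a command inside the instance
incus snapshot create web before-upgrade   # snapshot either kind the same way
incus move web --target node2              # move to another cluster member
```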


  • Re incus: I don’t know for sure yet. I have an old LXD setup at work that I’d like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.

    Maybe you should consider consolidating into Incus instead. You’re already running LXC containers, so why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is way faster, more stable, more integrated and free?


  • Does someone know a tool that creates a Certificate Authority and signs certificates with that CA? (…) just a tool that spits out the certificates and I manage them that way, instead of a whole service for managing certs.

    Yes, written in go, very small and portable: https://github.com/FiloSottile/mkcert.

    Just be aware of the risks involved with running your own CA.

    You’re adding a root certificate to your systems that will effectively accept any certificate issued with your CA’s key. If that private key gets stolen somehow and you don’t notice, someone might issue certificates that your machines will trust. Real CAs also have ways to revoke certificates that are checked by browsers (OCSP and CRLs), and they may employ other techniques such as cross-signing and chains of trust. All of those make it so a compromised certificate can be revoked and no longer trusted after the fact.
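    That said, if you want your own CA with nothing but openssl, a minimal sketch (all file names and subjects below are made up, adapt as needed):

```shell
# 1. Create the CA private key and a self-signed root certificate
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -subj "/CN=My Private CA" -keyout ca.key -out ca.crt

# 2. Create a key and a signing request (CSR) for a host
openssl req -nodes -newkey rsa:2048 \
  -subj "/CN=host1.example.org" -keyout host1.key -out host1.csr

# 3. Sign the CSR with the CA, adding the SAN modern clients require
printf "subjectAltName=DNS:host1.example.org\n" > host1.ext
openssl x509 -req -in host1.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -extfile host1.ext -out host1.crt

# 4. Verify the resulting chain
openssl verify -CAfile ca.crt host1.crt
```

    ca.crt is what you’d add to each device’s trust store; ca.key is the file you need to guard.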

    Why not Let’s Encrypt?

    That’s fair, but if your only concern is “I do not want any public CA to know the domains and subdomains I use”, you can get around that.

    Let’s Encrypt now supports wildcards, so you can probably do something like *.network.example.org and have an SSL certificate that will cover any subdomain under network.example.org (eg. host1.network.example.org). Or even better, get a wildcard for *.example.org and you’ll be done for everything.

    I’m just suggesting this alternative because it would make your life way easier and potentially more secure without actually revealing internal subdomains to the CA.
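    With certbot, the wildcard route would look something like this (wildcards require a DNS-01 challenge, since they can’t be validated over plain HTTP; the domain is an example):

```shell
# Request a wildcard cert; certbot will ask you to publish a TXT record
# at _acme-challenge.network.example.org before issuance
certbot certonly --manual --preferred-challenges dns \
  -d "*.network.example.org"
```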

    Another option is to just issue certificates without a CA and accept them one at a time on each device. This won’t expose you to a possibly stolen CA private key, and you’ll get notified if the previously accepted certificate of some host changes.

    openssl req -x509 -nodes -newkey rsa:2048 \
    -subj "/CN=$DOMAIN_BASE/O=$ORG_NAME/OU=$ORG_UNIT_NAME/C=$COUNTRY" \
    -keyout $DOMAIN_BASE.key -out $DOMAIN_BASE.crt -days $OPT_days "${ALT_NAMES[@]}"
    
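    If you go that route, a convenient way to recognize a certificate when accepting it on each device is its SHA-256 fingerprint (throwaway self-signed cert below just to illustrate; names are hypothetical):

```shell
# Generate a throwaway self-signed certificate to illustrate
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=host1.internal" -keyout host1.key -out host1.crt

# Print the fingerprint; compare it out-of-band before trusting the cert
openssl x509 -in host1.crt -noout -fingerprint -sha256
```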



  • Am I mistaken that the host shouldn’t be configured on the WAN interface? Can I solve this by passing the pci device to the VM, and what’s the best practice here?

    Passing the PCI network card / device to the VM would make things more secure, as the host won’t be configuring / touching the network card exposed to the WAN. Nevertheless, passing the card through makes things less flexible, and it isn’t required.

    I think there’s something wrong with your setup. One of my machines has a br0 and a setup like yours. 10-enp5s0.network is the physical “WAN” interface:

    root@host10:/etc/systemd/network# cat 10-enp5s0.network
    [Match]
    Name=enp5s0
    
    [Network]
    # Note: we're just saying that enp5s0 belongs to the bridge; no IPs are assigned here.
    Bridge=br0
    
    root@host10:/etc/systemd/network# cat 11-br0.netdev
    [NetDev]
    Name=br0
    Kind=bridge
    
    root@host10:/etc/systemd/network# cat 11-br0.network
    [Match]
    Name=br0
    
    [Network]
    # In my case I'm also requesting an IP for the host, but this isn't required; "DHCP=no" also works.
    DHCP=ipv4
    
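    If you’d rather give the host a static address on the bridge instead of DHCP, only the br0 .network file changes (addresses below are made up):

```ini
# 11-br0.network -- static-address variant
[Match]
Name=br0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=192.168.1.1
```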

    Now, I have a profile for “bridged” containers:

    root@host10:/etc/systemd/network# lxc profile show bridged
    config:
     (...)
    description: Bridged Networking Profile
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
    (...)
    

    And one of my VMs with this profile:

    root@host10:/etc/systemd/network# lxc config show havm
    architecture: x86_64
    config:
      image.description: HAVM
      image.os: Debian
    (...)
    profiles:
    - bridged
    (...)
    

    Inside the VM the network is configured like this:

    root@havm:~# cat /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0
    
    [Link]
    RequiredForOnline=yes
    
    [Network]
    DHCP=ipv4
    

    Can you check if your config is done like this? If so it should work.







  • (…) how dial-up worked, and saw that it was still possible to set up in modern-day; so it got me wondering what the privacy implications would be if I hypothetically were to use it. I imagine it would be terrible!

    Actually, we would all be way better off if everyone was still using 56k dial-up. Just think about it: with 56k, websites couldn’t store 2000 different cookies and run 30000 XHR requests to 3rd-party analytics companies, as it would take more time to fetch all that than to actually load the content. :)

    Either way, the fact that you’re running on dial-up doesn’t mean your connection isn’t secure. PPPoE can be used the same way it is used for FTTH links, and it allows security features like authentication and encryption to be implemented on top.


  • I do have an argument: https://lemmy.world/comment/7648533

    Any free private tracker worth your time has DHT/PEX disabled, thus making their torrents invisible to your typical govt / private entity searching for pirates. If those torrents aren’t public and can’t be searched / indexed via DHT, then the ISP or whoever knows you’re using the BitTorrent protocol, but they don’t know for what content. This is particularly true if you use sane settings in your torrent client, such as a blocklist + requiring encryption for all connections.

    If you do those simple things and use a private tracker you trust, then your ISP/govt can’t point fingers at you; they’ve no way of knowing what you’re downloading.


  • https://iknowwhatyoudownload.com/en/peer/ – Plug your IP into that. Private tracker torrents are still visible to the public.

    What you’re saying isn’t correct, at least for properly configured private trackers and clients.

    I did try that website, and that’s the thing: the only torrents that show up are public ones. Torrents from private trackers like iptorrents do not show up on that list, as expected. They don’t show because the site can’t access them; just read their about page and you’ll understand why:

    Our system collects torrent files in two ways: parsing torrent sites and listening DHT network

    Any private tracker worth your time has DHT/PEX disabled for their torrents, because if they didn’t, the torrents would essentially be public.




  • Hints? Don’t use Docker, for your own sake. Why would you? You’re already running LXC containers; just set up whatever you need inside those and you’re good to go, with way less overhead and bloat.

    While you’re at that, did you know that the creators of LXC have a solution called LXD/Incus that is way better at managing LXC containers and can also create and manage VMs? For what it’s worth, it’s a 100% free and open-source solution that can be installed on any clean Debian 12 setup from their repository, and it doesn’t require 1000 different daemons like Proxmox does, nor does it constantly nag for a license. :)
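    For reference, getting started on a clean Debian 12 looks roughly like this (package availability assumed via the bookworm-backports or Zabbly repositories; the container name is made up):

```shell
apt install incus                 # from bookworm-backports or the Zabbly repo
incus admin init                  # interactive first-time setup (storage, network)
incus launch images:debian/12 c1  # first system container
incus list                        # see it running
```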