Could always go to excessive measures: your own cloud-hosted VPN node to hop to an external provider, or similar. Unless you're a major target, nobody wants to deal with multiple providers and jurisdictions.
Some dingbat that occasionally builds neat stuff without breaking others. The person running this public-but-not-promoted instance because reasons.
It works so long as you're not trying to create separate networks. When/if you decide to start with some VLAN madness and such, the AP likely won't work for that, unless it's fancy and can do multiple SSIDs on separate VLANs, but most WiFi/router combos don't go that far.
Basically the new firewall/router box becomes the boss of everything, doing DHCP, likely DNS relaying, and all the monitoring. Simple and efficient; I just wouldn't go hosting public services with this setup since there's no 'DMZ' to keep them separate from your personal devices.
If I'm picturing the gear right, putting the TP-Link into AP mode would just make it a client of the network that serves as your WiFi, and the new box could be set up as the router/gateway for both the TP-Link and the other clients formerly plugged into it.
Usually, changing the mode from router to AP keeps the LAN side active as an unmanaged switch, and may even add the WAN port to it. So if all the above holds true, go modem, Celeron (OPNsense), TP-Link (LAN to LAN), and then plug the remaining Ethernet devices into either the TP-Link or the other LAN ports on the Celeron box; both should be the same local network.
The disk sizes also don't have to match. Creating a drive array for ZFS is a two-phase thing:
Creating a series of 'vdevs', which can be single disks or mirrored pairs,
Then you combine the vdevs into a 'zpool' regardless of their sizes and it all becomes one big pool. It acts somewhere between RAID and disk spanning: it reads and writes to all of them, but once any given vdev is full it just stops going there. I currently have vdevs of 12, 8, 6, and three 4 TB for a total of 38 TB of space, minus formatting loss.
That's an example of how I have it laid out. It'd be ideal to have them all the same size to balance it better, but it's not required.
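To make the pooling arithmetic concrete, here's a tiny Python sketch. The sizes come from the layout above; the allocation function is a simplified illustration of the "spreads writes, full vdevs stop getting data" behavior, not ZFS's actual allocator:

```python
# Mixed-size vdevs pool together; raw capacity is just the sum
# (before formatting/metadata loss).
vdev_sizes_tb = [12, 8, 6, 4, 4, 4]
print(sum(vdev_sizes_tb))  # 38

# Simplified picture of how writes spread across vdevs: roughly in
# proportion to each vdev's free space, so a full vdev stops receiving data.
def spread_write(free_tb, write_tb):
    total_free = sum(free_tb)
    return [write_tb * f / total_free if total_free else 0.0
            for f in free_tb]

# A 1 TB write lands mostly on the vdevs with the most free space.
print(spread_write(vdev_sizes_tb, 1.0))
```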
No, currently Univention Corporate Server (UCS), but I'll give those a look since I've been eyeing a replacement for a while due to some long-standing vulns that I'm keen to be rid of.
https://xigmanas.com/xnaswp/download/
For a pure NAS purpose this is my go-to. It serves drives, supports multiple file systems, and has a few extras like a basic web server and rsync built into a nice embedded system. The OS can run on a USB stick and manage the drives separately for the data.
On the ZFS front, a common misconception is that it eats a ton of RAM. What it actually does is use idle RAM for the 'ARC', which caches the most frequently and/or most recently used files to avoid pulling them from disk. That RAM gets dumped and made available to the system on demand if for whatever reason the OS needs it. Idle RAM is wasted RAM, so it's a nice thing to have available.
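If you do ever want to cap it anyway, the ARC size is tunable. A sketch assuming Linux/OpenZFS; on FreeBSD-based systems like XigmaNAS the equivalent knob is `vfs.zfs.arc_max` in `/boot/loader.conf`:

```
# /etc/modprobe.d/zfs.conf (Linux/OpenZFS)
# Cap the ARC at 8 GiB; the value is in bytes (8 * 1024^3).
options zfs zfs_arc_max=8589934592
```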
The other options, like making the containers dependent on mounts, are all really better, but a simple enough one is to use SMB/CIFS rather than NFS. It's a lot more transactional in design, so if the drive vanishes for a bit it will just come back once it's available again. It's also a fair bit heavier on the overhead.
Using NFSv4 seems to work in similar fashion without the overhead, though I haven't dug into the exact back and forth of the protocol to know how it differs from v3 to accomplish that.
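For comparison, hedged `/etc/fstab` sketches of both approaches. The server name, share, credentials file, and mount points are made up, and the systemd automount options are one way to get the "comes back when the drive is available" behavior:

```
# SMB/CIFS: mounted on first access, remounts on next access if the
# server went away for a while
//nas.lan/media  /mnt/media  cifs  credentials=/etc/smb-cred,vers=3.0,_netdev,noauto,x-systemd.automount,x-systemd.idle-timeout=60  0  0

# NFSv4: similar idea with lighter protocol overhead; 'soft' returns an
# error after retries instead of hanging forever
nas.lan:/media  /mnt/media-nfs  nfs4  _netdev,noauto,x-systemd.automount,soft,timeo=100  0  0
```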
It has all the needed parts plus an interesting plug-in app ecosystem, if you like that kind of thing. My only real gripe is a pile of high-severity vulnerabilities picked up by a scanning engine that haven't been fixed for a long time, so I'm reluctant to recommend it unless you have a solid security/segmentation setup in place.
Not AD proper, but an AD-compatible Linux distro acting as the controller to tie the desktops to, plus common credentials across several services. It just simplifies things not having a dozen different logins.
Currently an R730XD, but it has been run on plenty of other things, down to a 1.3 GHz/4 GB IPX box at the beginning. It's pretty stripped down to run as an embedded system rather than a full server OS.
I've used this on a variety of boxes since it was called FreeNAS, back before a fork long ago. A NAS doesn't need a whole lot of power in itself if the job is just to store and offer disk space. My current setup is a full 2U rack server with 14 drives (12 spinning, 2 SSD) and it averages 169 watts. If you do the transcoding on whatever box is actually accessing the data, it can save on the need for extra compute on the NAS.
If you have OPNsense in front of it all, using a DDNS client to register the public IP would be step one, then using HAProxy as an inbound proxy rather than port forwarding the traffic. That way you could have 'owncloud.your.domain' and 'otherservice.your.domain' hosted on the same IP using 80/443 rather than having to forward random ports in.
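A minimal sketch of what that HAProxy piece could look like. The hostnames, backend addresses/ports, and certificate path are all assumptions to adjust for your own setup:

```
# Route by Host header so multiple services share one public IP on 443.
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/
    acl is_owncloud  hdr(host) -i owncloud.your.domain
    acl is_other     hdr(host) -i otherservice.your.domain
    use_backend owncloud_srv  if is_owncloud
    use_backend other_srv     if is_other

backend owncloud_srv
    server oc1 192.168.1.10:8080 check

backend other_srv
    server os1 192.168.1.11:8081 check
```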
The right way is the way that works best for your own use case. I like a 3-box setup (firewall, hypervisor, NAS) with a switch in between. Lets you set up VLANs to your heart's content, manage flows from an external point (virtual firewalls are fine, but if it's the authoritative DNS/DHCP for your net it gets a bit chicken-and-egg when it's inside a VM host), and store the actual data like vids/pics/docs on the NAS that has just that one job of storing files; less chance of borking it up that way.
It serves as a nice aggregate hub for a lot of household tasks, although I mostly use it for the lists and recipes at this time.
I’ve used Homechart for a while, does lists, plans, budgets and a bunch of other stuff.
There are two avenues for opening an encrypted file: attacking the password/access method, or attacking the encryption itself.
Generally, a basic zip-style lock is not going to have a second factor, a rate-limiting mechanism, or really anything other than the password to stop a random brute-force effort if someone gets hold of the file for local processing.
Using something with some front-end protection, like Bitwarden with 2FA or KeePass with the key file option added in, makes it more a task of going after the crypto itself, which is a much, much harder approach.
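Some back-of-envelope Python to show the gap between the two avenues. Every number here (password length, character set, guess rate) is an assumption for illustration only:

```python
# Attacking the password: feasible-ish for an offline brute force.
charset = 95            # printable ASCII characters (assumed)
length = 10             # assumed password length
guesses_per_sec = 1e10  # assumed offline cracking rig

keyspace = charset ** length
years = keyspace / guesses_per_sec / (3600 * 24 * 365)
print(f"10-char password: ~{years:.0f} years to exhaust worst-case")

# Attacking the crypto: with a key file in the mix, the attacker is
# effectively up against the key space itself, e.g. a 256-bit key.
key_years = 2**256 / guesses_per_sec / (3600 * 24 * 365)
print(f"256-bit key: ~{key_years:.1e} years")
```

Even with generous assumptions for the attacker, the key-space number is so far beyond the password number that the password (or its front-end protections) is always the part worth hardening.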
Roku enabled, by chance? I have 2 of them plugged in on my IoT segment and have 54K blocks to scribe.logs.roku.com in the past 30 days.
I've got a container set up for this. It drops the output on the NAS and can be accessed from any box on the local net. The only issue is it has a tendency to need the container recycled every so often, like it just gets bored sitting there and quits. 🤔
Possible, but personally, if I were running a public WiFi it'd have outbound port restrictions and app checking to avoid liability for just such a reason… So in theory, no, they couldn't tie it to you with certainty, unless it was one of those things like 'use your library ID to log into the captive portal', but a decent admin wouldn't allow it.
Content aside, it’s odd to see a title with ‘Exclusive’ on a platform freely federating things between a bunch of independent nodes.