LXC and Docker are not equivalent: LXC provides system containers, while Docker provides application containers.
Yeah, those are both good examples of interdependent supply between the US and Canada, where it makes sense to keep markets geographically together.
I’m referring more to artificial blocks: provincial barriers put in place for political reasons, like my alcohol example. Historically, these were restrictions for tax reasons between Ontario, Quebec and western Canada, but recent (last 20 years) spats and competition for transfer payments have essentially cut off lumber, paint, car parts, raw minerals, etc. between provinces as close as Saskatchewan and Manitoba.
And that is on top of intangible services gradually being restricted more and more between provinces. As a remote worker and contractor, I’ve seen the rules about working in multiple provinces simultaneously tighten over time.
These measures aren’t there to balance economics, they exist because provinces compete more than they cooperate.
No, there are significant regulatory barriers between provinces.
Alberta has hard limits on how much BC wine and produce can be sold in stores. Consequently, it’s cheaper to get Australian wine in Calgary than wine from the Okanagan Valley. Same with fruit, but from the USA.
So I went to the demo and I have a few questions:
haystack-mountain-101522-105940.gpx
{"message":"TypeError: Cannot read properties of null (reading 'id')"}
I am actually really impressed with what you have so far, and I’d love to start using this!
Ok, I think I can deal with recording on an OSM client. I’ll give Wanderer a try.
I want to try this. I’m one of the unfortunate victims of Gaia GPS turning to trash.
However, I can’t seem to find in the docs how tracks can be recorded…
Is there an app?
Do I need to be in contact with the server to record a track?
Do I need to ask my friends to send me gpx exports if they aren’t on strava?
Do you envision an integration with opentrailmap so I can share trails without having to expose Wanderer to the public?
OVS is fine: you can make live changes, and something like spanning port traffic is a bit less hassle than using tc. Beyond that, though, it’s no more important to a failover scenario than any other vswitch, since it has no idea what a TCP stream is.
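To put the span-port point in concrete terms, here’s a minimal sketch that just wraps ovs-vsctl from Python. The bridge and port names (br0, tap-ids) are placeholders, not anything from this thread, and the mirror syntax follows the OVS FAQ example. The tc equivalent needs an ingress qdisc plus a mirred filter per direction, which is the extra hassle I mean.

```python
import subprocess

def mirror_all(bridge: str, out_port: str, name: str = "span0") -> None:
    """Copy every packet crossing `bridge` to `out_port` in one OVS transaction."""
    subprocess.run(
        [
            "ovs-vsctl",
            # grab the UUID of the destination port
            "--", "--id=@p", "get", "Port", out_port,
            # create a mirror that selects all traffic and outputs to that port
            "--", "--id=@m", "create", "Mirror",
            f"name={name}", "select-all=true", "output-port=@p",
            # attach the mirror to the bridge
            "--", "set", "Bridge", bridge, "mirrors=@m",
        ],
        check=True,  # raise if ovs-vsctl rejects the transaction
    )

mirror_all("br0", "tap-ids")  # e.g. feed the tap interface of an IDS VM
```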
For sure, if your thing is leaning into network configs, nothing wrong with it, especially if you have proper failover set up.
I think virtualized routing looks fun to the learning homelabber, and it is, but it does come with some caveats.
HA… Do you mean failover? It would need some consideration: either a second WAN link, or accepting that a few TCP sessions might reset after the cutover, even with state sync. But it’s definitely doable.
I’m currently ramping down my hardware from a 1U dual-Xeon to a more appropriate solution on less power-hungry gear, so I’m not as interested in setting up failover if it means adding to my power consumption simply for the uptime. After 25 years in IT, it’s become clear to me that the solutions we put in place at work come with downsides like power consumption, noise, complexity and cost that aren’t offset by any meaningful advantage.
All that said, I did run that setup for a few years and it does perform very well. The one advantage of having the router virtualized was being able to revert to a snapshot if an upgrade failed, which is a good case for virtualizing a router on its own.
I did it for a few years. It looks interesting on paper, but in practice it’s a nightmare.
At home, you’ll be getting real sick of asking for change windows to reboot your hypervisor.
At work, you will rue the day you convinced mgmt to let it happen, only to now have hypervisor weirdness to troubleshoot on top of chasing down BGP and TCP header issues. If it’s a dedicated router, you can at least narrow the scope of possible problems.
How efficient is using a GPU? I understood the efficiency wasn’t nearly as good, but that may have been info from a while back.
It’s well worth it to get a $50 Coral TPU for object detection. Fast inference speed and nearly zero CPU usage.
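For scale, the whole pycoral detection loop is about a dozen lines. This is a sketch based on Coral’s own example code; the model file is their sample SSD MobileNet and frame.jpg is a placeholder:

```python
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Load a model compiled for the Edge TPU (Coral's sample SSD model here;
# Frigate and friends ship their own).
interpreter = make_interpreter("ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the frame to the model's input size; inference itself runs on the TPU,
# which is why host CPU usage stays near zero.
image = Image.open("frame.jpg").resize(common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)
interpreter.invoke()

# Each detection carries a class id, confidence score, and bounding box.
for obj in detect.get_objects(interpreter, score_threshold=0.5):
    print(obj.id, obj.score, obj.bbox)
```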
These projects are poorly maintained and abandoned because the email industry has consolidated down to a very few players, and they don’t care about IMAP standards, DMARC, DKIM or any of it.
You’re running head-on into the primary reason no one self-hosts email anymore: it has gone from being a nuisance to being adversarial.
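If you want to see what you’re up against, checking what a domain actually publishes for DMARC is a single DNS lookup. A quick sketch with dnspython (the helper name is mine):

```python
import dns.resolver  # pip install dnspython

def dmarc_policy(domain: str) -> str | None:
    """Return the raw DMARC TXT record for `domain`, or None if unpublished."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record at all
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

print(dmarc_policy("gmail.com"))  # e.g. "v=DMARC1; p=none; sp=quarantine; ..."
```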