Briar is its own thing.
Living 20 minutes into the future. Eccentric weirdo. Virtual Adept. Time traveler. Thelemite. Technomage. Hacker on main. APT 3319. Not human. 30% software and implants. H+ - 0.4 on the Berram-7 scale. Furry adjacent. Pan/poly. Burnout.
I try to post as sincerely as possible.
Except they don’t, because insects and corpses are animals too.
I get the point you’re trying to make but it falls flat if you peek in on that part of the world once in a while.
It’s the internet.
Probably. Somebody’s got to get that promotion by launching something before the next round of layoffs.
The personalized data model will be trained on your voice, which means it’s going to be trained on a great deal of patient medical history (including PII, which in this context is PHI). That puts it squarely under HIPAA.
I strongly doubt the service in question meets even the most minimal of requirements.
I think it’s interesting that limited AI technology has made it to street level. There was talk of keeping it entirely in-house as a “secret sauce” for competitive advantage (I used to work for one of the companies building large-scale practical LLMs), so when OpenAI started gaining notice it raised an eyebrow.
Security-wise it’s a pretty big step backward, because the code it hacks together tends to have older vulns in it. It’s not like secure software development practices are commonly employed right now anyway. I’m not sure when that’s going to become a huge problem, but it’s just a matter of time.
One privacy-compromising problem has already been stumbled over (ChatGPT could be tricked into dumping memory buffers containing other users’ conversations into a chat session), and there will undoubtedly be more. This also has implications for business use: folks are already putting sensitive client information into chats with LLMs, which means it’s going to leak eventually.
I really hope that entirely self-hosted LLMs become common and easy to deploy. If nothing else, they’re great for analyzing and finding stuff in your personal data that other forms of search aren’t well suited for. Then again, I hoard data so maybe I’m projecting a little here.
As for my job, I’m of two minds about it. LLMs can already be used for generating boilerplate for scripts, Terraform plans, and things like that (but then again, keeping a code repo of your own boilerplate files is a thing, or at least it used to be). It might be useful for rubber ducking problems (see also, privacy compromise).
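For what it’s worth, the boilerplate-repo approach doesn’t need an LLM at all — a snippet plus a tiny templating script covers most of it. A minimal sketch (the snippet layout and all names here are hypothetical, not from any actual repo of mine):

```python
from string import Template

# Hypothetical Terraform backend boilerplate you might keep in a
# personal snippet repo, filled in per-project instead of asking
# an LLM to regenerate (and possibly mangle) it each time.
TF_BACKEND = Template("""\
terraform {
  backend "s3" {
    bucket = "$bucket"
    key    = "$key"
    region = "$region"
  }
}
""")

rendered = TF_BACKEND.substitute(
    bucket="example-state",
    key="prod/terraform.tfstate",
    region="us-east-1",
)
print(rendered)
```

Same idea scales to script headers, CI configs, and so on — and nothing sensitive ever leaves your machine.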
It wouldn’t surprise me if LLMs become a big reason for layoffs, if they’re not already. LLMs don’t have to be paid, don’t have tax overhead, don’t get sick, don’t go BOFH, and don’t unionize. The problem with automating yourself out of a job is that you no longer have a job, after all. So I think it’s essential for mighty nerds to invest the time into learning a trade or two just in case (I definitely am - companies might be shooting themselves in the foot by laying off their sysadmins, but if it means bigger profits for shareholders they’ve demonstrated that they’re more than happy to do so).
Same as it always did.
I can’t paywall break them, either. Oh, well.
“Worst of the worst” means that nothing of value was lost.
Could’ve been the journo. Could’ve been one of the editors.
No, not at all.
That sounds like the kind of job-hunting advice from the late ’90s that was often parroted but never actually a thing.
Destination port 123/udp isn’t Tor. That’s NTP.
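Easy to verify, too: an NTP client request is a 48-byte UDP datagram whose first byte packs leap indicator, version, and mode. A quick sketch (purely illustrative; nothing here is Tor-related):

```python
import struct

# First byte of an NTPv4 client request:
# LI (2 bits) = 0, Version (3 bits) = 4, Mode (3 bits) = 3 (client)
first_byte = (0 << 6) | (4 << 3) | 3  # 0x23

# Minimal 48-byte NTP client packet: header byte followed by zero padding.
# You'd send this via socket.sendto() to some host on 123/udp.
packet = struct.pack("!B47x", first_byte)
```

If the traffic on 123/udp looks like this, it’s a clock sync, not an onion circuit.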
Something seems fucky. Check this out: https://www.metastem.su/
So, how much to buy a couple of lobbyists to get this ball rolling?
Where even the mosquitos have health bars.
Not quite. A useful tool, though.
That’s still too far.
Killed my brand-new phone’s power cell in about six hours. Kind of impressive, really.