• 6 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • Not ‘for Android’, but this TTS model is popular: https://github.com/coqui-ai/TTS

    This one is a little older but works as well: https://github.com/snakers4/silero-models

    Both of those are AI models only. Most ‘offline’ AI still runs over a network in practice: I have it available on my phone at home, but that requires setup, and I’m really connecting to my computer to offload the task to my GPU (there’s a sketch at the end of this comment). Personally, my phone doesn’t have anywhere near enough RAM to run all of Android’s (zygote) bloat, even on GrapheneOS, plus any model I would want to run.

    I don’t think we are at the point where mobile devices have the hardware specs needed for this to happen natively yet. Maybe it will happen soon though.

    That’s just what I know, but it is like water cooler talk and not primary source authority by any stretch.
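    For illustration, here is a minimal sketch of the offloading setup I mean, using the coqui TTS package from the first link. The server command, port, and endpoint are the defaults I recall from its docs, and the LAN address is made up; treat all of them as assumptions to verify, not gospel.

    ```python
    # On the desktop with the GPU (one-time setup, in a shell):
    #   pip install TTS
    #   tts-server --model_name tts_models/en/ljspeech/tacotron2-DDC
    #
    # On the phone (e.g. in Termux), a tiny client that offloads the
    # synthesis work to the desktop over the local network:
    import requests

    DESKTOP = "http://192.168.1.50:5002"  # assumed LAN address, assumed default tts-server port

    resp = requests.get(f"{DESKTOP}/api/tts", params={"text": "Hello from my phone."})
    resp.raise_for_status()

    with open("speech.wav", "wb") as f:
        f.write(resp.content)  # the server responds with rendered WAV audio
    ```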


  • That wouldn’t do a whole lot in practice for things like phones. Having root access is not the actual hurdle. The hardware itself is usually undocumented, and the kernel is not merged into mainline, so the community can’t actually support the device in a meaningful way.

    The only kernel that supports the device is the ancient, orphaned, forked kernel that ships from the manufacturer. This is what Android really is, and why you’ll often hear people say it is not real Linux. In truth it is its own thing, with a stripped-down Linux kernel underpinning it. Google puts together a stripped-down kernel that is specifically set up so hardware manufacturers can add their hardware-support binaries at the last possible minute. For the community to support the hardware in the mainline kernel, you would need hardware documentation for the chipset and the source code for those binaries. (There’s a quick sketch after this comment showing how to see the vendor fork on a real device.)

    These hardware manufacturers are too embarrassed to share their terrible code, and too worried about getting caught with all of the IP they have stolen to build their hardware. Their criminality comes with the added benefit of stealing end-consumer ownership through planned obsolescence.

    I didn’t word my reply as directly or as strongly, but that is the glass-half-empty truth.
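    Here is the quick check mentioned above; it assumes adb is installed on the computer and the phone has USB debugging enabled, and the example version string in the comments is only illustrative.

    ```python
    # Sketch: ask a connected Android device what kernel it actually ships.
    import subprocess

    result = subprocess.run(
        ["adb", "shell", "uname", "-r"],
        capture_output=True, text=True, check=True,
    )
    print("Device kernel:", result.stdout.strip())
    # Typical output looks like "4.19.113-something" long after mainline
    # has moved on: that is the orphaned vendor fork in the flesh.
    ```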

  • I’m not upset, because I think it is totally irrelevant: training an AI does not reproduce any works, and it is no different from a person who reads or sees those works and then talks about them or creates in their style.

    At its core, if this is given legal precedent, the final distilled issue amounts to thought policing. That would be a massive regression of fundamental human rights with terrible long-term implications, no different from how allowing companies to own your data and manipulate you has directly led to a massive regression of human rights over the last 25 years. Reacting like foolish Luddites to a massive change that seems novel in the moment will have far-reaching consequences that most people lack the fundamental logic skills to piece together.

    In practice, offline AI is like having most of the knowledge of the internet readily available for your own private use, in a way that is custom tailored to each individual. I’m actually running large models on my own computer daily; this is not hypothetical or hyperbole, it is empirical. (A minimal sketch of the setup follows.)
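    As a concrete illustration of that daily-driver setup, here is a minimal sketch using the llama-cpp-python package; the GGUF file name is a placeholder for whatever quantized model you have downloaded, and the context size is just a plausible default, not a recommendation.

    ```python
    # Minimal local-inference sketch with llama-cpp-python.
    from llama_cpp import Llama

    # Placeholder path: point this at any GGUF model file you have locally.
    llm = Llama(model_path="./models/your-model.Q4_K_M.gguf", n_ctx=2048)

    out = llm(
        "Summarize why running AI models offline matters for privacy.",
        max_tokens=64,
    )
    print(out["choices"][0]["text"])
    ```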

  • I don’t really follow AI image generation for the art. I’m looking for things I don’t think I could recreate myself. Things like the electric-slide image simply resonate with how I feel about dancing: I’d rather do anything else. Styles that might resonate elsewhere, or that read as traditional, are far less appealing to me, and if an image looks like anything I could get at random, I’m less interested too. I’m not very emotionally invested, though.