

We’ve heard about this before: some students now make deliberate mistakes in their work, because writing phrases correctly gets you flagged as AI by the university’s anti-AI AI. We’ll get real artists flagged, I’m sure of it.


Do you really need to ask this question with LLMs around? We’ve had big data for several decades, and when it emerged, nobody gave a fuck about privacy because “who would even want to look at me specifically?”.
Now we’ve reached the point where everyone can be analysed via AI. All those kinds of questions could be answered almost automatically:
Etc etc.
Gifting your data to tech-fash corpos is a form of suicide.


If an LLM is tied to making you productive, going local is about owning and controlling the means of production.
You aren’t supposed to run it on the machine you work on anyway; set up a server and send requests.
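The “server + requests” setup can be sketched roughly like this, assuming your local server exposes an OpenAI-compatible chat endpoint (Ollama and llama.cpp’s server both do); the host, port, and model name here are placeholders for whatever you actually run:

```python
import json
import urllib.request

# Placeholder address for a local LLM box on your LAN; Ollama's default
# port is 11434, adjust to your own setup.
SERVER_URL = "http://192.168.1.50:11434/v1/chat/completions"

def build_request(prompt, model="llama3:8b"):
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask(prompt):
    """POST the payload to the local server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        SERVER_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Your work machine then only ever holds a thin client like this, while the heavy model weights live on the server.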


Corporate would still use it 😒


It comes down to the amount of VRAM / unified RAM you have. There’s no magic that makes an 8B model perform like the top-tier subscription LLMs (likely in the 500B+ parameter range; I wouldn’t be surprised if it’s trillions).
If you can get to 32B / 80B models, that’s where the magic starts to happen.
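A back-of-the-envelope sketch of why the RAM number gates which models you can run. The 4-bit quantization and ~20% overhead for KV cache and activations are my ballpark assumptions, not official figures:

```python
# Rough memory estimate: parameters x bits-per-weight, plus ~20% overhead
# (assumed) for KV cache and activations.
def vram_gb(params_billions, bits_per_weight=4, overhead=1.2):
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# At 4-bit quantization: 8B fits in a consumer GPU, 32B wants a big
# card or unified RAM, 70B wants a lot of unified RAM.
print(round(vram_gb(8), 1))   # ~4.8 GB
print(round(vram_gb(32), 1))  # ~19.2 GB
print(round(vram_gb(70), 1))  # ~42.0 GB
```

Which is why unified-RAM machines are interesting: 64–128 GB of shared memory covers the model sizes where, per the comment above, the magic starts.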


I’m in an English-speaking country now, and it was one of the reasons I emigrated there specifically: I learned the language over the years at home, first by playing games & reading lyrics & browsing the internets, then by watching movies with subs, then by forcing myself to switch subs off and catch the words by ear. Also work calls.


It would just get classified as “too big to fail” and bankrolled by the government, because, you know, the market.
Locally run LLMs on unified RAM could be the one reason to do this, but that’s a thin market, and Apple is more about an ecosystem for a wider audience.