Love or hate just please explain why. This isn’t my area of expertise so I’d love to hear your opinions, especially if you’re particularly well versed or involved. If you have any literature, studies or websites let me know.
I have a vague memory that Bitcoin used to be effectively instant in the early versions, or at least that a broadcast transaction could be assumed genuine with near certainty before confirmation, but that the protocol was later modified in a way that made this mechanism unreliable (possibly the move away from trusting zero-confirmation transactions once replace-by-fee arrived). It might have been enshittified.
AI is still largely subject to garbage in, garbage out.
Exactly. When it comes to code, for instance, what percentage of the training data is Knuth, Carmack, and similarly skilled programmers, and what percentage is spaghetti code perpetrated by underpaid and uninterested interns?
Shitty code in the wild massively outweighs properly written code, so by definition an LLM autocomplete engine, which at best can only produce an average of its training data, will only produce shitty code. (Of course, average or below-average programmers won't be able, or willing, to recognise it as shitty code, so they'll feel like it's saving them time. And above-average programmers won't have a job anymore, so they won't be able to do anything about it.)
And as more and more code is produced by LLMs, the proportion of shitty code in the training data will only climb, and the output will only get shittier, until newly trained LLMs can produce nothing but code too shitty to even compile, and there will be no programmers left to fix it, and civilisation will collapse.
But, hey, at least the line went up for a while and Altman and Huang and their ilk will have made obscene amounts of money they didn’t need, so it’ll have been worth it, I suppose.