Love it or hate it, just please explain why. This isn’t my area of expertise, so I’d love to hear your opinions, especially if you’re particularly well versed or involved. If you have any literature, studies, or websites, let me know.
They just don’t do anything useful, and the hype-ers are acting like they’re AGI. Hallucinations make them too unreliable to be trusted with “real work”, which makes them useless for anything beyond a passing gimmick. Vibe-coded software is invariably shit. Doing any serious task with “AI assistance” ends up either taking more work than doing it without LLMs or sacrificing quality or correctness in huge ways.

Any time you point this out to hype-ers, they start talking about “as AI advances” as if it’s a foregone conclusion that it will. People talked the same way about blockchain, and the only “advancements” made in that sphere have been more grifts. Meanwhile, it still takes anywhere from ten minutes to an hour to buy a hamburger with Bitcoin, and it gets worse with greater adoption. Just as you can’t make a distributed blockchain cryptocurrency that resolves discrepancies automatically, without relying on humans, fast, at scale (and even if you could make it fast, it’d introduce at least as many problems as it purports to “solve”), you can’t make LLMs not hallucinate. The only way to solve hallucinations is to abandon LLMs in favor of an entirely different algorithm.
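For what it’s worth, the “ten minutes to an hour” figure isn’t a made-up jab; it falls straight out of Bitcoin’s design. The protocol targets roughly one block every ten minutes, and a merchant who wants reasonable assurance a payment won’t be reversed typically waits for somewhere between one and six confirmations. A back-of-the-envelope sketch (the confirmation counts are just the commonly cited range, not a protocol rule):

```python
# Back-of-the-envelope: why paying on-chain with Bitcoin is slow.
# The protocol's difficulty adjustment targets ~1 block per 10 minutes;
# a merchant waiting for N confirmations waits ~N blocks on average.
# (Block arrival is random, so real waits vary widely around this mean.)
BLOCK_INTERVAL_MIN = 10  # protocol's target average block interval

for confirmations in (1, 3, 6):
    expected_wait = confirmations * BLOCK_INTERVAL_MIN
    print(f"{confirmations} confirmation(s): ~{expected_wait} min on average")
```

One confirmation gets you the low end (~10 minutes); the commonly recommended six confirmations gets you the hour. And because block size is capped, heavier adoption means a bigger fee market and longer queues, not faster checkouts.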
If anything LLMs have blocked us from making progress toward AGI by distracting us with gimmicky bullshit and taking resources from other efforts which may otherwise have pushed us in the right direction.
Mind you, “AI” is a very old term that can mean a lot of different things. I took a class in college called “Introduction to Artificial Intelligence” in… maybe 2006 or 2007. And in that class, I learned about the A* algorithm. Every time you played an escort mission in Skyrim and had an NPC following you, it was the A* algorithm (or some slight variation on it) that made sure the NPC could traverse the terrain and keep roughly in tow with you despite obstacles of various sorts. It’s absolutely nothing like LLMs. It doesn’t need to be trained; the algorithm fully works the moment it’s implemented. If you want to know why it made a particular decision, you can trace the logic and determine exactly why it did what it did, unlike with LLMs. It’s built for a narrow purpose (pathfinding) rather than trying to be general purpose like an LLM. It requires no massive data centers and doesn’t consume massive amounts of memory. And it doesn’t hallucinate. The AI hype-ers (and the media, who have mostly fallen for their grift hook, line, and sinker) love to conflate completely unrelated technologies to give the impression that LLMs are getting better because such-and-such article mentions an “AI” that discovered a groundbreaking new drug. But the kind of AI they use to find drugs is very special purpose and has nothing to do with how LLMs work.
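To make the contrast concrete, here’s a minimal A* sketch on a toy grid (0 = open, 1 = wall). Everything about it is inspectable: each expansion follows from the cost f = g + h, with Manhattan distance as the heuristic, and you can reconstruct exactly why every step was taken.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* pathfinding on a 2-D grid of 0 (open) / 1 (wall).

    Deterministic and fully traceable: every expansion follows from
    f = g + h, where g is the path cost so far and h is the Manhattan
    distance to the goal (an admissible heuristic on a 4-connected grid).
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f, g, cell)
    came_from = {}                      # parent links, for tracing the path
    best_g = {start: 0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Walk the parent links back to reconstruct the full path.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
]
path = a_star(grid, (0, 0), (4, 3))
```

Run it twice and you get the same answer twice. No training run, no GPU farm, no hallucinated walls. That’s the kind of “AI” that predates the current hype by half a century.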
LLMs can’t do your job, but the grifters are doing a damned good job of convincing your boss that LLMs can in fact do your job. As Cory Doctorow says, the current AI craze “is the asbestos that we’re shoveling into our walls”. We’re causing huge problems with it and if/when the bubble properly pops, we’re going to spend a long time painstakingly extracting it from our systems, replacing it with… you know… stuff that actually works, and repairing the damage it’s done in the meantime.
Meanwhile, it’s Nvidia and OpenAI and so on who are boosting the LLM bubble. And they’ve made a shit ton of money off of their grift at the expense of everyone else. How anyone can look at all this and not think “scam” is beyond me.
I have a vague memory that Bitcoin transactions used to be treated as near-instant in the early versions, or at least that a broadcast transaction could be accepted with near certainty that it was real, but that the protocol was later modified in a way that made this mechanism unreliable. It might have been enshittified.
AI is still largely affected by garbage in garbage out.
Exactly. When it comes to code, for instance, what percentage of the training data is Knuth, Carmack, and similarly skilled programmers, and what percentage is spaghetti code perpetrated by underpaid and uninterested interns?
Shitty code in the wild massively outweighs properly written code, so by definition an LLM autocomplete engine, which at best can only reproduce an average of its training data, will only produce shitty code. (Of course, average or below-average programmers won’t be able, or willing, to recognise it as shitty code, so they’ll feel like it’s saving them time. And above-average programmers won’t have a job anymore, so they won’t be able to do anything about it.)
And as more and more code is produced by LLMs the percentage of shitty code in the training data will only get higher, and the shittiness will only get higher, until newly trained LLMs can only produce code too shitty to even compile, and there will be no programmers left to fix it, and civilisation will collapse.
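The feedback loop above can be sketched as a toy model. The assumptions are deliberately crude and the numbers purely illustrative (this is not a real training dynamic): suppose a model’s output quality equals the average quality of its corpus minus a small regurgitation penalty, and each generation its output replaces some share of the corpus.

```python
# Toy sketch of the recursive-training feedback loop described above.
# All numbers are hypothetical and purely illustrative:
#  - model output quality = mean corpus quality minus a small penalty
#  - each "generation", synthetic output replaces a share of the corpus
corpus_quality = 1.0    # arbitrary starting average, arbitrary units
penalty = 0.1           # quality lost per generation of regurgitation
synthetic_share = 0.5   # fraction of the new corpus that is model output

for generation in range(1, 6):
    model_output = corpus_quality - penalty
    corpus_quality = ((1 - synthetic_share) * corpus_quality
                      + synthetic_share * model_output)
    print(f"gen {generation}: corpus quality {corpus_quality:.3f}")
```

Under these assumptions the corpus quality drifts down by a fixed amount every generation and never recovers, which is the whole point: once synthetic output dominates the training data, the average it regresses toward keeps falling.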
But, hey, at least the line went up for a while and Altman and Huang and their ilk will have made obscene amounts of money they didn’t need, so it’ll have been worth it, I suppose.