

They did? They can’t? I haven’t noticed. 🤷
I am live.



I consider this man to be the worst individual in all of human history. Maybe the single worst human to ever live on the face of the planet.
What makes him this bad? He knew. He knew what leaded gasoline was doing and what Freon would do, and he did it anyway to help corporations make a little more money.
There are some pretty bad people mentioned under this post. But this guy… Man.


I think the better question and the one that would get more yes or no answers is do you have any screens in your bedroom? Not necessarily televisions.
I suspect significantly more people look at their phones in bed than watch large TVs.


You sound like a flat-earther insisting you know the truth when the evidence clearly contradicts you.
I don’t need to prove anything to you; I can rely on verifiable facts.
The Linux kernel used in Android has been significantly modified to meet Google’s requirements.
Android does not behave or function like a conventional Linux distribution.
Android is fundamentally different from other operating systems, aside from portions of the kernel that remain unchanged.
Android is “Linux” only at the kernel level, which is insufficient to classify it as Linux in any meaningful, user-facing sense.
What a subset of developers choose to call Linux is irrelevant here; this is a straightforward equivocation fallacy.


No.
I will try to explain.
To use a simple analogy, the Linux kernel is like the engine of a car. A Linux distro is everything else around that engine. You can take the same engine and place it into many different shells. While the engine remains the same, the surrounding components can vary wildly.
That’s why there are dozens, if not hundreds, of different Linux distros.
A company like Google can take the Linux kernel and build an operating system like Android around it, resulting in the fragmented mess it is today.
However, saying that Android is Linux is an oversimplification. It is more accurate to say that Android is built on the Linux kernel, not that it is Linux in the same sense as a traditional GNU/Linux distribution.


I don’t think you actually read what I wrote before responding. What does the kernel itself have to do with the point I’m making about the distinction between the Linux kernel, Linux, and Android?
Yes, Android uses the Linux kernel. That’s not the argument. The kernel by itself is not the operating system people are referring to when they say they “use Linux.”
Android is not a traditional Linux system, and more importantly, it is not some bastion of open-source purity. It is developed and controlled by Google, with most real-world functionality tied to its proprietary ecosystem.
So bringing up the kernel doesn’t actually address what I said.


Oh please, that’s such a lazy “gotcha.”
Yes, Android uses the Linux kernel. Congratulations, you’ve identified the lowest common denominator. That does not mean you’re “using Linux” in any meaningful sense of the word.
When people talk about using Linux, they’re talking about an actual Linux environment, full control, GNU userland, desktop distributions, package management, the whole ecosystem. Not a locked-down mobile OS where everything is sandboxed behind an app store and you interact with it through a touchscreen UI.
By your logic, using Android makes you a Linux power user, which is obviously absurd.
You’re technically correct in the most superficial way possible, but it completely misses the point I was making.


This post and its entire comment section are hilarious because the vast majority of people browse the internet on their phones, usually through Safari or Chrome.
What I find funny is that some people arrogantly and confidently turn their noses up at Windows users for not using Linux, yet they themselves are still using either an iPhone or an Android device.


No. That is not what the analogy means. That is what you are choosing to extract from it because it supports the direction you want this exchange to go.
The use of the word “regurgitate” carries a very specific implication. It suggests that LLMs retrieve and repeat stored information verbatim. That is not how they function. We both appear to agree on that point.
LLMs do not rely on stored facts in the way the analogy implies. They generate outputs by modeling patterns in data, producing responses that are often novel rather than retrieved.
Whether or not the model understands or comprehends the content is irrelevant to this distinction. Comprehension is not a requirement for the system to function. So yes, the analogy is overly simplistic and ignores the actual mechanism at work.
To be precise: it does not matter that the model lacks awareness or understanding. It is still capable of analyzing patterns and generating new outputs from its training data. That is not regurgitation.
As concisely as I can put it: LLMs do not regurgitate data, so the analogy fails.


He is claiming the analogy works, then retreating to a more defensible position by admitting the system is more complex.
I am not being overly simplistic or imprecise. I am stating plainly that the analogy fails. LLMs do not regurgitate stored information. They generate novel outputs by statistically modeling and interpreting patterns in their training data. I supported that position with objective facts, and no one has attempted to directly refute them. Instead, the responses rely on vague arguments about “precision” and “simplicity,” which do not address the core claim.


Someone else in the comments said it perfectly. AI is just data regurgitation. It’s like calling me highly intelligent because I read you a paragraph from Wikipedia. I didn’t know anything. I just read a thing and said it out loud.
Christ on a stick.
The original analogy literally states “AI is just data regurgitation.” Now you’re, what, saying it’s more complex? Ever heard of a motte and bailey? Cuz that’s what you’re doing now.
Once again, for the people in the back: the analogy is a failure. It does not work. LLMs are not regurgitation machines.


I fully understand the analogy being presented. It is a poor analogy and fundamentally incorrect because that is not how LLMs function. They do not “read back Wikipedia pages,” which is a complete misunderstanding of the technology, not a minor lack of precision.
I am not disputing that it is an analogy, nor am I claiming that exact precision is necessary to analyze it. The point remains: the analogy fails.
What is curious is how people focus on my tone, saying I am aggressive or should be more precise, rather than engaging with the substance of my argument. So far, no one has directly refuted my points. This suggests that many responding are simply following the anti-AI bandwagon without understanding the technology, which is both reductive and disappointing.


The analogy is terrible and is not at all, once again, what LLMs do.
This is an objective fact, and I have provided evidence to support it.
How are you saying the analogy is good?


The reason he should learn about it is because he’s talking about it as though he’s informed and he is not.
I don’t have to be an LLM programmer working at OpenAI to have a working knowledge of how these machines function. It’s literally just a Google search.
He made an unreasonable, ignorant comment and I called him out. He should feel ashamed, and I have absolutely no reason to water down what I’m saying under the guise of being nice.


Calling an LLM a Wikipedia regurgitator is factually and objectively incorrect.
Is there anything that you can say to refute the facts that I presented in my above comment?
(I rolled my eyes so hard at your comment that I pulled my back out)


No. You’re not just wrong, you’re aggressively uninformed.
Repeating the same tired “AI is just regurgitating data” line makes it clear you don’t understand what you’re criticizing. Calling large language models “AI” the way you are doing just exposes that you do not know what you are talking about. It is like a creationist smugly saying “orangutang” instead of “orangutan” and thinking they sound informed. You are not demonstrating insight. You are advertising ignorance.
What you’re describing, reading a paragraph off Wikipedia, is literal retrieval. That is not how modern language models operate. They are not databases with a search bar attached. They are probabilistic systems trained to model patterns, structure, and relationships across massive datasets. When they generate a response, they are not pulling a stored paragraph. They are constructing output token by token based on learned representations.
If it were just regurgitation, you would constantly see verbatim copies of training data. You do not. What you see instead is synthesis. Concepts are recombined, abstracted, and adapted to context. The system can explain the same idea multiple ways, shift tone, handle novel prompts, and connect ideas that were never explicitly paired in the source material. That is fundamentally different from reading something out loud.
Your analogy fails because it assumes nothing is being transformed. In reality, transformation is the entire mechanism. Information is compressed into weights and then expanded into new outputs.
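You can see this even with a toy model. Here is a minimal sketch, a character-level bigram sampler that is nowhere near a real transformer, just to show that sampling from learned statistics produces sequences that never appear verbatim in the training data:

```python
import random

# Toy character-level bigram "model": count which character follows
# which across a tiny training corpus, then sample new text one
# token at a time. Not remotely a real LLM, but it shows the core
# distinction: output is generated from learned statistics, not
# retrieved from storage.
corpus = ["the cat sat", "the dog sat", "a cat ran"]

# Learn transition statistics from the corpus.
transitions = {}
for line in corpus:
    for a, b in zip(line, line[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start="t", length=11, seed=None):
    """Sample a string character by character from the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in transitions:
        out.append(rng.choice(transitions[out[-1]]))
    return "".join(out)

# "the cat ran" is a perfectly valid chain of learned transitions,
# yet it appears nowhere in the corpus: recombination, not retrieval.
print(generate(seed=0))
```

Scale that idea up by many orders of magnitude, with learned vector representations instead of raw counts, and you get closer to what an LLM is doing when it constructs output token by token.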
Is it human intelligence? No. Is it perfect? No. But reducing it to “just reading Wikipedia out loud” is not skepticism. It is a basic failure to understand how the technology works.
If you are going to criticize something, at least learn what it is first.


It really isn’t. But you do you boo.


I know Lemmy’s very anti-AI, but this is really fascinating stuff.
I use ReVanced, so it’s not going to affect me.