Love or hate just please explain why. This isn’t my area of expertise so I’d love to hear your opinions, especially if you’re particularly well versed or involved. If you have any literature, studies or websites let me know.
More capable than the crowd here lets on. My take is this: unchecked capitalism is a danger to mankind. The pervasiveness of LLMs right now is just a symptom of that. The rich are the problem, not the AI.
It is a tool; a very good one along many axes. I think people who say it isn’t good for writing code are misinformed or intentionally disingenuous. It is extremely good at that, but it is just a tool, not a replacement.
But it is the applications in pure maths, virology, protein folding, etc. where it gets really interesting.
Water consumption, power consumption, and profit motives aside, they are fascinating tools.
That said, If Anyone Builds It, Everyone Dies is a fascinating take on how this could all go wrong.
In any case, I can’t understand the people that say stuff like, “It is just autocomplete on steroids,” or “it is just a probabilistic prediction tool.” Okay, but like… that’s all we are too.
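To make the “probabilistic prediction” phrase concrete: at its core that just means sampling the next item from a probability distribution over possibilities. A minimal sketch, with a made-up vocabulary and made-up probabilities (real models compute these from context with billions of parameters):

```python
import random

# Toy "probabilistic prediction": pick the next word by sampling
# from a probability distribution. Vocabulary and probabilities
# here are invented purely for illustration.
next_word_probs = {"cat": 0.5, "dog": 0.3, "lizard": 0.2}

def sample_next(probs):
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next(next_word_probs))  # one of "cat", "dog", "lizard"
```

The dismissive framing and the defense are describing the same mechanism; the disagreement is over whether that mechanism is trivial.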
Summary, interesting tools being used for profit at the expense of economies, the environment, and creative fields.
Whoever told you that was lying to you or misinformed. Neuroscientists do not look at the brain as a probabilistic prediction tool. You are not a database with weights, you’re a human being with experiences, emotions, and thoughts.
We are nearly precisely that. The brain functions as a massive, self-organizing neural network where cognitive architecture is determined by the strength of connections (the biological equivalent of adjustable computational weights) that modulate signal transmission via the flow of ions.
Every decision made or breath taken is the outcome of how ions flow through this network.
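The “adjustable weights” analogy above can be sketched in a few lines: an artificial neuron combines its inputs through weights (loosely analogous to synaptic strengths) and passes the sum through a nonlinearity. All numbers here are illustrative, not a model of any real neuron:

```python
import math

# Toy artificial neuron: a weighted sum of inputs plus a bias,
# squashed by a sigmoid into a "firing rate" between 0 and 1.
# Adjusting the weights changes how strongly each input matters,
# loosely like strengthening or weakening a synapse.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

Whether stacking billions of these is “like a brain” is exactly the point under dispute in this thread; the sketch only shows what the weight metaphor refers to.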
Let me know when you find a neurologist that says brains are just like LLMs.
That isn’t likely to happen. Fortunately, I never said that. But a pithy comeback won’t change the accuracy of describing the brain as a self-assembling probabilistic network. All your memories, experiences, and emotions are part of that.
Rewording a description of what an LLM is and saying brains are just like that is still saying that brains work like LLMs, even if you didn’t use those exact words. The acknowledgment that neurologists do not find evidence to support that is pretty much all that is necessary to tear that down, no matter how many times you repeat it.
If I say “A screwdriver is a tool,” and “The brain is a tool,” am I then saying “The brain is just like a screwdriver”? Or is it possible that applying second-order logic to an admittedly and clearly reductive statement I made isn’t productive?
And which part of the brain description is inaccurate, specifically?
pithy hot takes are 90% of ai criticism
They literally can’t do pure math. Like everyone knows how bad they are at even simple math. We have had tools that do pure math for thousands of years, and we call them calculators. A hotbox for an imaginative mathematician? Sure, but any conclusions drawn get drawn elsewhere with more traditional tools.
I hear this criticism of LLMs all the time and I just don’t get it. They’re language models, they take language inputs and produce language outputs. They aren’t designed to do math. It’s like complaining that a reciprocating saw can’t do math.
It wouldn’t bear repeating if so many people didn’t think these tools have calculator functionality. Maybe they wouldn’t think that if the people who designed them were honest about what they have made rather than trying to sell it to investors as AGI.
Maybe this is reflective of my media bubble, but I’ve never encountered someone claiming that LLMs should be used as calculators. Most of the advertising I’ve seen (not much) is centered around natural language search and image recognition. I only really hear about them being bad at math from detractors, and I think it misses the mark of why AI companies are dangerous. The problem with LLMs is not that they’re bad at math, or even that they get non-math answers wrong sometimes. The problem is when they’re controlled by humans with a political axe to grind, who deliberately wish to obscure or distort the information their users can access, cf. Grok.
There is active research right now into their use in pure maths. I don’t think it is primarily about direct solutions, but about program synthesis for formal logic. Keep in mind this isn’t just LLMs, but also graph networks and other non-transformer networks.
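For context on what “program synthesis for formal logic” means here: the target isn’t a numeric answer but a machine-checkable proof term, which a proof assistant then verifies independently of whoever (or whatever) produced it. A toy example in Lean 4, chosen only to show the shape of such an artifact:

```lean
-- The proof checker, not the author, certifies this statement.
-- A model that emits a term like this can be wrong often and
-- still be useful, because wrong terms simply fail to check.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is why hallucination matters less in this setting: any conclusion that survives is verified by the traditional tooling.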