The takeaway from all LLM-based AI is the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.
The “AI” is just streamlining the process to save time.
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
Relying on it in any other circumstances (though medical use is understandable if you’re simply too poor or don’t have access to care) while it is exhausting water supplies and polluting the planet instantly proves that you are stupid and inconsiderate.
the user needs to be smart enough to do whatever they’re asking anyway
I’m gonna say that’s ideal but not quite necessary. What’s needed is that the user is capable of properly verifying the output. Which anyone who could do it themselves definitely can, but it can be done more broadly. It’s an easier skill to verify a result than it is to obtain that result. Think: how film critics don’t necessarily need to be filmmakers, or the P=NP question in computer science.
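The verify-vs-obtain gap is easy to see concretely. A quick sketch (Python, toy subset-sum example of my own, not from the thread): checking a proposed answer is one cheap pass, while producing an answer from scratch means searching an exponential space.

```python
from itertools import combinations

def verify(nums, picked_indices, target):
    """Checking a proposed subset: one linear pass, trivially cheap."""
    return sum(nums[i] for i in picked_indices) == target

def solve(nums, target):
    """Finding a subset yourself: brute force over up to 2^n candidates."""
    for r in range(len(nums) + 1):
        for combo in combinations(range(len(nums)), r):
            if verify(nums, combo, target):
                return combo
    return None

nums = [3, 9, 8, 4, 5, 7]
print(verify(nums, (1, 3), 13))  # checking someone else's answer: instant
print(solve(nums, 13))           # producing it yourself: exhaustive search
```

Same asymmetry as the film-critic analogy: the checker's job is structurally easier than the maker's.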
But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.
In Wikipedia’s case, the worst outcome is that you simply fail to make a usable edit or new article. So you can test whether AI can produce a post up to standard using people who can verify but not write, hopefully saving enough time and bulk to help that group learn to write properly themselves, while leaving the articles AI will fuck up to the people who can do it right.
I don’t think AI users would say it does reformatting either (if they’re honest): If you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. It should only take one time for someone to get burned for them to learn that lesson.
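That “got burned once” check doesn’t have to rely on eyeballing, either. A minimal sketch (Python; the helper names are mine, not from the thread): since a faithful reformat changes only layout, you can mechanically diff the word stream of the output against the original instead of trusting the model.

```python
import re

def words(text: str) -> list[str]:
    # Keep only the word content, ignoring whitespace and line breaks.
    return re.findall(r"\w+", text.lower())

def reformat_is_faithful(original: str, reformatted: str) -> bool:
    """True if a 'reformat only' edit left the actual wording intact."""
    return words(original) == words(reformatted)

original = "The cat sat\non the mat."
print(reformat_is_faithful(original, "The cat sat on the mat."))       # layout-only change
print(reformat_is_faithful(original, "The cat was sitting on the mat."))  # silent rewrite
```

Anyone who can run a check like this can catch the silent rewrite even if they couldn’t have written the text themselves.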
AIbros: we’re creating God!!!
AI users: it can do translation & reformatting pretty well but you’ve got to check it’s not chatting shit
That takeaway is absolutely the case, and honestly, at least for now, how it needs to be across the board.
No one should be using AI to do things they’re incapable of doing (or undoing).