But if the output has issues, what are you going to do, prompt it again? If you can only verify the task but not do it, you cannot correct the AI's mistakes yourself.
In Wikipedia's case, the failure mode is just that you don't make the edit or new article. So you can test whether AI can produce a usable, up-to-standard post with people who can verify but not write themselves, hopefully saving enough time and bulk to help that group learn to write properly, while leaving the cases AI will fuck up to people who can do them right.