Hardening Firefox with Anthropic's Red Team (blog.mozilla.org)
Posted by Beep@lemmus.org to Technology@lemmy.world · English · edited 1 month ago · 9 comments
lIlIlIlIlIlIl@lemmy.world · 1 month ago:
Hallucinated? From researched and documented code spelunking?

PabloSexcrowbar@piefed.social · 1 month ago:
That's… exactly my point though…

Reply · 1 month ago:
What is?

PabloSexcrowbar@piefed.social · 1 month ago:
That even though the team is using AI to check for vulnerabilities, they're trained and know when their AI is hallucinating and when it's not.

lIlIlIlIlIlIl@lemmy.world · 1 month ago:
I guess I'm not sure how hallucinating and reading from source code are overlapping. Do you think these models are just barfing back garbage nonsense?

PabloSexcrowbar@piefed.social · 1 month ago:
Do you somehow not? Open source projects have been running out of resources because they're overwhelmed with bogus bug reports filed by AI.