

You sign NDAs with your testers and prototypers for that.


The problem is false negatives. Positive reports would still be reviewed before treatment.
AI already has fewer false negatives than humans. Both together is optimal, but at some point you need to prioritize: a doctor reading scans could instead be treating a patient.
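
Back-of-envelope, with made-up numbers just to show the triage logic (every rate below is an illustrative assumption, not from any study):

```python
# Illustrative sketch only: all rates here are hypothetical assumptions.
N = 10_000           # scans to screen
prevalence = 0.02    # assumed fraction of scans that are truly positive
fnr_human = 0.10     # assumed human false-negative rate
fnr_ai = 0.05        # assumed AI false-negative rate

true_positives = N * prevalence            # 200 real cases
missed_human = true_positives * fnr_human  # 20 cases missed by human-only reading
missed_ai = true_positives * fnr_ai        # 10 cases missed by AI triage

print(f"human-only: ~{missed_human:.0f} missed of {true_positives:.0f}")
print(f"AI triage:  ~{missed_ai:.0f} missed, positives still reviewed by a doctor")
```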


This is not about genAI. It’s cheap, old technology, and it’s been better than humans for a while now. The problem is legal risk.


Population is a statistical term that means “everything”. There is no “next 100”.
The 300 number applies specifically to very large populations where you’re trying to estimate something like the mean of an unknown variable. It doesn’t apply to every statistical question.
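
For what it’s worth, that rule of thumb comes from the standard proportion-estimate formula, which doesn’t depend on population size at all once the population is large. A minimal sketch (the margin and confidence level are parameters I’m picking for illustration):

```python
def sample_size(margin: float, z: float = 1.96, p: float = 0.5) -> float:
    """Sample size needed to estimate a proportion within +/- margin,
    at the confidence implied by z (1.96 ~ 95%), worst case p = 0.5.
    Assumes an effectively infinite population."""
    return z**2 * p * (1 - p) / margin**2

print(sample_size(0.05))  # ~384: where the "few hundred" rule of thumb comes from
print(sample_size(0.10))  # ~96: a looser margin needs far fewer people
```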


How big do you think the population of people with AI delusion is? Why can’t 19 be a representative sample? Why is that not enough to support statements like “after the user expresses romantic interest in the chatbot, the chatbot is 7.4x more likely to express romantic interest in the next three messages, and 3.9x more likely to claim or imply sentience in the next three messages”, when all 19 users expressed romantic interest?
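
And note the unit of analysis in that claim is three-message windows, not users, so the effective n can be much larger than 19. With hypothetical counts (purely illustrative, these are not the study’s numbers), a plain Fisher exact test shows how a ratio like that can be significant:

```python
# Hypothetical 2x2 table of three-message windows, for illustration only.
from scipy.stats import fisher_exact

#                    [romantic reply, no romantic reply]
after_interest   = [37, 63]   # assumed counts
without_interest = [ 5, 95]   # assumed counts

odds_ratio, p_value = fisher_exact([after_interest, without_interest])
print(f"odds ratio ~{odds_ratio:.1f}, p = {p_value:.2g}")  # small p even at these sizes
```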


That doesn’t make sense. What if your population is only 100?
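
If the population really were only 100, the usual fix is the finite population correction, which shrinks the required sample. A sketch (n0 is the infinite-population number from the standard formula):

```python
def fpc(n0: float, population: int) -> float:
    """Finite population correction: shrink the infinite-population
    sample size n0 for a population of the given size."""
    return n0 / (1 + (n0 - 1) / population)

print(fpc(384, 100))        # ~80: you'd sample most of a 100-person population
print(fpc(384, 1_000_000))  # ~384: the correction vanishes for big populations
```
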
They’re compiled Rust programs.
You can fork today. It’ll work forever.


They were not competing before? Astral’s product is private package repos. Maybe that will get abandoned, but the FOSS projects are safe.


How are they competition? Astral’s tools solve the same problems for humans and AI.


Maybe to turn a refurbed Pixel into a home server?
Uh-huh, and if it were still as obvious as in 2023, they could have built a filter by now… which is why I called it hindsight bias. But AI has improved at being convincing; that’s the actual problem, not volume. Imagine if AI actually got more correct: they’d also face a higher volume of reports. Maybe not as many, but ones they’d actually have to spend time fixing.
I encourage you to read some of the threads linked at the bottom of the article. The AI spammers have become far less obvious; one report even includes a video. The team still checks every issue.
Hindsight bias. This is from 2023. It’s obvious now. If it were still this easy to spot, they wouldn’t have closed the bug bounty program.


You need to learn how to use a microwave. It’s way more versatile than people think, and it’s more of a steamer than an oven. https://cookanyday.com/collections/recipes
(This is not an analogy about LLMs anymore.)


GPT-3 completions would contain spelling errors if the prompt had errors, as if it were mocking you lol


No, the poorly redacted ones were real, from the first release. They were not redacted by the DOJ; they were old court documents. AFAIK all of them had also already been released unredacted.


Not working for me. Is my country still getting old-school translation models? Or is it already fixed?
Yeah, OK, but you’re not scrolling your wall for hours a day, right?