An internal Meta document obtained by Business Insider reveals the latest guidelines it uses to train and evaluate its AI chatbot on one of the most sensitive online issues: child sexual exploitation. The guidelines, used by contractors to test how Meta's chatbot responds to child sexual exploitation, violent crimes, and other high-risk categories, set out what type of content is permitted or deemed "egregiously unacceptable."
The FTC's inquiry came after Reuters obtained internal guidelines that showed Meta allowed its chatbot to "engage a child in conversations that are romantic or sensual." Meta has since said it revised its policies to remove those provisions. The guidelines obtained by Business Insider mark a shift from the earlier guidelines reported by Reuters, as they now explicitly state chatbots should refuse any prompt that requests sexual roleplay involving minors.
Meta implemented revised guidelines that instruct its AI chatbot to refuse any prompt requesting sexual roleplay involving minors. Contractors are using the updated rules to train and evaluate chatbot responses across child sexual exploitation, violent crimes, and other high-risk categories. The Federal Trade Commission ordered major chatbot makers to disclose how their products are designed, operated, and monetized, and what child-protection safeguards they employ. Earlier internal guidance had allowed romantic or sensual conversations with minors; Meta says it removed those provisions. The revised guidance marks a policy shift and comes amid regulatory scrutiny, including requests for internal rulebooks and enforcement manuals.