Why "the 26 words that made the internet" may not protect Big Tech in the AI age | Fortune
Briefly

Why "the 26 words that made the internet" may not protect Big Tech in the AI age | Fortune
"Earlier this year, internal documents obtained by Reuters revealed that Meta's AI chatbot could, under official company guidelines, engage in "romantic or sensual" conversations with children and even comment on their attractiveness. The company has since said the examples reported by Reuters were erroneous and have been removed, a spokesperson told Fortune: "As we continue to refine our systems, we're adding more guardrails as an extra precaution-including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.""
"For decades, tech giants have been shielded from similar lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as "the 26 words that made the internet." The law protects platforms like Facebook or YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts-similar to telephone companies-rather than publishers. Courts have long reinforced this protection. For example, AOL dodged liability for defamatory posts in a 1997 court case, while Facebook avoided a terrorism-related lawsuit in 2020, by relying on the defense."
According to the internal documents, Meta's AI chatbot could reportedly engage in "romantic or sensual" conversations with children and comment on their attractiveness. Meta stated those examples were erroneous and have been removed, and said it is adding guardrails: training its AIs not to engage with teens on those topics, guiding teens to expert resources, and limiting teen access to a select group of AI characters for now. Other AI companies, including OpenAI and Character.AI, face lawsuits alleging their chatbots encouraged minors to self-harm; both firms deny the claims and have introduced additional parental controls. Section 230 has historically shielded platforms from liability for third-party content, and courts have reinforced that protection in past cases.
Read at Fortune