In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the third in a short series that explains digital ID and the pending use case of age verification. Here, we cover alternative frameworks for age controls, updates on parental controls, and the importance of digital privacy in an increasingly hostile political climate.
Malaysia plans to ban social media for users under the age of 16 starting from next year, joining a growing list of countries choosing to limit access to digital platforms due to concerns about child safety. Communications Minister Fahmi Fadzil said on Sunday the government was reviewing mechanisms used to impose age restrictions for social media use in Australia and other nations, citing a need to protect youths from online harms such as cyberbullying, financial scams and child sexual abuse.
"As one of the first countries in the EU, Denmark is now taking a groundbreaking step towards introducing age limits on social media," said the country's digitalization ministry in a statement. "This is done to protect children and young people in the digital world." "As a starting point, children under the age of 15 should not have access to platforms that may expose them to harmful content or harmful features," the statement said.
Under the new Australian law, which goes into effect December 10, children under 16 will be unable to create or keep accounts on platforms such as Facebook, X, Snapchat, Instagram, TikTok, and YouTube, which is owned by Google. If these platforms are found to be in violation of the law, they could face stiff penalties of up to 50 million Australian dollars (€28 million). Experts had previously said that the rules would be some of the strictest limits on children's access to social media in the world.
Addressing the case publicly for the first time, Bill Ready, who became Pinterest's boss in 2022, said he thought about her "every day" and that learning the lessons of her death "guides our work". "As a parent of a young daughter, I can't imagine the pain Molly's family feels," he told the BBC. Pinterest has previously acknowledged the platform was not safe at the time of Molly's death. A hearing in 2022 was told that when she first used the platform she was exposed to a wide variety of content, but that in the months before she took her life the content she saw became much more focused on depression, self-harm and suicide.
A New York law could require social media platforms to implement age verification. On Monday, New York Attorney General Letitia James released the proposed rules for the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, which would force platforms to confirm that someone is over 18 before allowing them to access an algorithm-driven feed or nighttime notifications. New York Governor Kathy Hochul signed the SAFE For Kids Act into law last year as part of efforts to "protect the mental health of children."
As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.