Wednesday, July 24, 2024

Tech Giants Required to Report on Child Abuse Material and Deep Fake Content Handling | eSafety Commission

Tech giants Google, Meta, Microsoft, and Apple have been given a stern ultimatum by the Australian eSafety Commission to take stronger action in protecting children online. Failure to comply with the new regulations could result in fines of up to $782,500 a day.

Under these new guidelines, the four companies will be required to submit a report every six months to the eSafety Commission, detailing their efforts to combat child abuse material on their platforms. Furthermore, social media services like Discord and WhatsApp will need to provide additional information on their strategies for addressing deepfake material of children created with AI, live-streamed abuse, and sexual extortion.

The eSafety Commissioner, Julie Inman Grant, explained that these notices were designed to increase pressure on tech giants to enhance their child safety measures. In previous reports, the companies' responses revealed significant safety shortcomings that were alarming, if not surprising. Despite subsequent conversations, meaningful changes and improvements have yet to be seen.

Apple and Microsoft, for instance, have been criticized for not actively detecting child abuse material stored in their cloud services, despite being aware of its presence. Similarly, apps such as FaceTime, Discord, and Skype do not employ technology to detect abuse during live streams. Additionally, certain Google services, including YouTube, do not block links to websites known to host child abuse material.

Response times also vary among different platforms. Microsoft reported an average response time of two days, while Snap claimed to respond within four minutes. However, Ms. Inman Grant emphasized that when a child is at risk, every minute counts.

The focus of these notices is primarily on preventing adults from contacting children online, addressing the risks of sexual extortion, combatting livestreamed abuse, and tackling AI-generated deepfakes. The eSafety Commissioner hopes that these notices will encourage companies to make improvements in online safety across the board.

Tech companies have until February 15, 2025, to submit their initial responses. Meanwhile, UNICEF Australia's digital policy lead, John Livingstone, has already expressed support for the move, highlighting the importance of holding tech companies accountable for their role in protecting children online. UNICEF believes that the online world should be a safe environment for young people to explore, connect, and learn.

In response to concerns about children accessing pornography, the eSafety Commission has also issued notices to tech companies to develop enforceable codes that prevent children from accessing explicit content. Ms. Inman Grant expressed worry about children being exposed to graphic material at increasingly younger ages, citing research that shows a third of Australian children encounter pornography before the age of 13, often by accident. Furthermore, 60 percent of young people report being exposed to pornography on social media platforms.

Overall, these new measures aim to ensure the safety of children online and hold tech giants accountable for protecting the most vulnerable users. However, there is a continued call for even stronger measures in the Online Safety Act to provide the highest level of protection possible for children in the online world.