Thursday, July 25, 2024


The Proliferation of Sexual AI Apps: A Simple and Devastating Threat for Victims

The proliferation of sexual AI (artificial intelligence) applications on smartphones has made it easier for perpetrators to commit offenses, according to the eSafety Commissioner. During a recent inquiry hearing on a new sexual deepfakes bill, Commissioner Julie Inman Grant highlighted the availability of apps designed for nefarious purposes in app stores. She specifically mentioned apps that openly promote their ability to modify pictures of girls using AI. These types of apps, which are often free and easy to use, make it simple and cost-free for perpetrators to inflict incalculable devastation on their victims.

One of the main concerns raised by the eSafety Commissioner is the presence of open-source sexual AI apps that use sophisticated monetization tactics and are becoming increasingly popular on mainstream social media platforms, particularly among younger audiences. Referring to a recent study, Inman Grant revealed that there was a 2,408 percent increase in referral links to non-consensual pornographic deepfake websites across Reddit and X (formerly Twitter) in 2023 alone. The rise of multimodal forms of generative AI, such as hyper-realistic synthetic child sexual abuse material created via text prompt to video, highly accurate voice cloning, and manipulated chatbots, further amplifies the risks of grooming, sextortion, and other forms of sexual exploitation of young people on a large scale.

To address these risks, the eSafety Commissioner’s agency has submitted mandatory standards to parliament to strengthen regulation of sexual AI apps. However, Inman Grant also emphasized that tech companies should bear the burden of reducing the risks on their platforms. She believes AI companies must do more to ensure their platforms are not weaponized to abuse, humiliate, and denigrate children, including robustly enforcing terms of service and implementing clear reporting mechanisms. The Commissioner also stressed the importance of holding app stores accountable for hosting and distributing these apps, and the need for strict safety standards.

While authorities are taking action to mitigate the risks associated with AI, Inman Grant acknowledged the significant challenges facing law enforcement. Deepfake detection tools lag behind the freely available tools used to create deepfakes, and the deepfakes themselves are becoming so realistic that they are difficult to discern with the naked eye. The speed at which AI-generated deepfakes can be produced and shared overwhelms investigators and support hotlines, making the material difficult to report, triage, and analyze effectively.

When dealing with sexual abuse material, the eSafety Commissioner revealed that the agency often takes an informal approach, despite having formal means available. Under current laws, the online content regulator can informally request that online service providers remove illegal or restricted content. Informal pathways are preferred because they are quicker, allowing harmful content to be taken down promptly and providing faster relief to victims. Since the introduction of the Online Safety Act 2021, eSafety has issued 10 formal warnings, 13 remedial directions, and 34 removal notices to entities in Australia.

In conclusion, the proliferation of sexual AI apps poses significant risks by making it easier for perpetrators to commit offenses. The eSafety Commissioner has called for stronger regulations and greater accountability from tech companies, while acknowledging that law enforcement struggles to detect and address AI-generated deepfakes. In practice, the agency often relies on informal requests to remove illegal content quickly, alongside the formal warnings, remedial directions, and removal notices enabled by the Online Safety Act 2021. Continued action against the growing threats posed by sexual AI apps and deepfakes is crucial to protect victims and prevent further harm.
