{"id":216711,"date":"2023-11-19T17:52:37","date_gmt":"2023-11-19T17:52:37","guid":{"rendered":"https:\/\/bestwnews.com\/?p=216711"},"modified":"2023-11-19T17:52:37","modified_gmt":"2023-11-19T17:52:37","slug":"australia-launches-world-first-crackdown-on-deepfake-porn","status":"publish","type":"post","link":"https:\/\/bestwnews.com\/technology\/australia-launches-world-first-crackdown-on-deepfake-porn\/","title":{"rendered":"Australia launches world-first crackdown on \u2018deepfake\u2019 porn"},"content":{"rendered":"
Tech giants including Apple, Google and Meta will be forced to do more to tackle online child sexual abuse material and pro-terror content, including "deepfake" child pornography created using generative AI, under world-first industry standards laid out by Australia's eSafety Commissioner.
Following more than two years of work, and after rejecting draft codes created by the tech industry, eSafety Commissioner Julie Inman Grant will release draft standards on Monday covering cloud-based storage services such as Apple iCloud, Google Drive and Microsoft OneDrive, as well as messaging services such as WhatsApp, requiring them to do more to rid their services of unlawful content.
Australia's eSafety Commissioner Julie Inman Grant. Credit: Rhett Wyman

Inman Grant, a former Twitter executive, said she hoped Australia's industry standards would be the "first domino" of similar regulations globally to help tackle harmful content.

She said the requirements would not force tech companies to break their own end-to-end encryption, which is turned on by default on some services, including WhatsApp.

All major tech platforms have policies that ban child sexual abuse material from their public services, but Inman Grant said they have not done enough to police their own platforms.

"We understand issues around technical feasibility, and we're not asking them to do anything that is technically infeasible.

"But we're also saying that you're not absolved of the moral and legal responsibility to just turn off the lights or shut the door and pretend this horrific abuse isn't happening on your platforms.

"What we've found working with WhatsApp, it's an end-to-end encrypted service, but they pick up on a range of behavioural signals that they've developed over time, and they can scan non-encrypted parts of the services, including profile and group chat names, and things like cheese pizza emojis, which is known to stand for child pornography."

"These and other interventions enable WhatsApp to make 1.3 million reports of child sexual exploitation and abuse each year," she added.

The standards will also cover child sexual abuse material and terrorist propaganda created using open-source software and generative AI. A growing number of Australian students, for example, are creating so-called "deepfake porn" of their classmates and sharing it in classrooms.

"We're seeing synthetic child sexual abuse material being reported through our hotlines, and that's particularly concerning to our colleagues in law enforcement, because they spend a lot of time doing victim identification so that they can actually save children who are being abused," she said.

"I think the regulatory scrutiny has to be at the design phase. If we're not building in and testing the efficacy and robustness of these guardrails at the design phase, once they're out in the wild and they're replicating, then we're just playing probably an endless and somewhat hopeless game of whack-a-mole."

Inman Grant's office has commenced public consultation on the draft standards, a process that will run for 31 days. She said the final versions of the standards will be tabled in federal parliament and come into effect six months after they are registered.

"The standards also require these companies to have sufficient trust and safety resourcing and personnel. You can't do content moderation if you're not investing in those personnel, policies, processes and technologies," she said.

Elon Musk, chief executive officer of X, which has refused to pay a $610,500 fine from the eSafety Commissioner for allegedly failing to adequately tackle child exploitation material on its platform. Credit: Bloomberg
"And you can't have your cake and eat it too. And what I mean by that is, if you're not scanning for child sexual abuse, but then you provide no way for the public to report to you when they come across it on your services, then you are effectively turning a blind eye to live crime scenes happening on your platform."

The introduction of the standards comes after social media giant X, formerly known as Twitter, refused to pay a $610,500 fine from the eSafety Commissioner for allegedly failing to adequately tackle child exploitation material on its platform.

X has filed an application for judicial review in the Federal Court.

"eSafety continues to consider its options in relation to X Corp's non-compliance with the reporting notice but cannot comment on legal proceedings," a spokesman for the commissioner said.