Now, children using iMessage can report nudity to Apple; feature rolls out in Australia

Edited By: Vinod Janardhanan | WION Web Team
Sydney | Updated: Oct 24, 2024, 05:05 PM (IST)

Apple is criticised for under-reporting child abuse material on its platforms. Image courtesy: Apple and Andrea Piacquadio/Pexels

Story highlights

Apple launches nudity reporting for child users on iMessage: Apple has introduced a new iMessage feature that lets children report nude images and videos they encounter on the messaging app. The child-safety feature is being rolled out first in Australia. Find out how it will work

Apple has launched a feature on its iMessage platform that allows children to report nude images and videos to the company, which can then bring them to the notice of law enforcement agencies.

The change is being rolled out first in Australia from Thursday (Oct 24).

The beta release is part of an update to Apple's operating systems and extends the safety measures that have applied to users aged under 13 since iOS 17.


Here is how the new feature, designed to protect young users without compromising their privacy, will work:

iPhones can automatically detect nudity in images and videos that children receive or attempt to send in iMessage, AirDrop, FaceTime and Photos.

When such sensitive imagery is detected, the young users will be given intervention screens before they can proceed.

There will be options to contact a parent or guardian and to report the images and videos to Apple.

If a child chooses to report, the device generates a report containing the offending images or videos, along with the messages sent immediately before and after them.

The report will include contact information of both the sender and receiver. 

Users can fill out a form to explain what happened, which Apple will review.

Apple can then take action, such as disabling the user's ability to send messages through iMessage.

Apple can also report the issue to law enforcement agencies.
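
For readers curious about the plumbing, the on-device nudity detection that underpins this flow is closely related to what Apple exposes to app developers through its SensitiveContentAnalysis framework on iOS 17 and later. The sketch below is illustrative only, not Apple's own implementation: the function name checkIncomingImage and the incomingImageURL parameter are hypothetical, and in a real app the analyzer only returns results when the app holds the relevant entitlement and the user (or a parent) has switched the safety setting on.

```swift
import Foundation
import SensitiveContentAnalysis

// Minimal sketch: checking a received image for nudity entirely on-device
// using Apple's SensitiveContentAnalysis framework (iOS 17+ / macOS 14+).
// "incomingImageURL" is a hypothetical local file URL used for illustration.
func checkIncomingImage(at incomingImageURL: URL) async {
    let analyzer = SCSensitivityAnalyzer()

    // If the user has not enabled Sensitive Content Warnings or
    // Communication Safety, the policy is .disabled and nothing is flagged.
    guard analyzer.analysisPolicy != .disabled else {
        print("Sensitive content analysis is disabled on this device.")
        return
    }

    do {
        // The classification runs on-device; no image data leaves the phone
        // at this stage.
        let analysis = try await analyzer.analyzeImage(at: incomingImageURL)
        if analysis.isSensitive {
            // An app would blur the image and show an intervention screen,
            // mirroring the flow described above.
            print("Nudity detected: show warning and offer to report.")
        } else {
            print("No sensitive content detected.")
        }
    } catch {
        print("Analysis failed: \(error)")
    }
}
```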

The tech giant said the feature will eventually be rolled out across the world after the beta test in Australia.

Australia was chosen as the first market because of new rules coming into force in the country that crack down on harmful content on tech platforms and devices.

The new rules require companies to monitor child abuse and terror-related content on messaging and cloud services in Australia by the end of 2024.

Apple maintains that the new features preserve end-to-end encryption, and it had raised concerns that the rules could enable mass surveillance; the law has since been softened somewhat.

Yet, the Australian government insists that companies must take action to tackle child abuse and terror content.


Apple has drawn criticism around the world over its reluctance to weaken its end-to-end encryption to help law enforcement.

Apple had earlier planned to scan photos and videos stored in iCloud for child sexual abuse material (CSAM), but the plan was scrapped in late 2022 over privacy concerns.


In the UK, the National Society for the Prevention of Cruelty to Children had accused Apple of vastly undercounting how often CSAM appears in its products, according to a report in The Guardian newspaper.

Apple reported only 267 cases of suspected CSAM on its platforms in 2023.

That's seen as a massive under-reporting when compared to Meta, the owner of Facebook, Instagram and WhatsApp, which submitted more than 30 million CSAM reports.

Google sent nearly 1.5 million reports in the same period. 

(With inputs from agencies)
