
Google has unveiled new software to help curb the spread of child sexual abuse material (CSAM) online. The technology, which uses artificial intelligence (AI) to review CSAM content, "significantly advances" existing technologies, according to a Google blog post, and improves how companies review this content by applying deep neural networks to image processing. "We've seen firsthand that this system can help a reviewer find and take action on 700% more CSAM content over the same time period," the post reads.
Google's new software differs from existing systems in that it not only sorts images previously confirmed as CSAM but also identifies new images quickly. For perspective, existing systems typically flag content by matching it against known CSAM. The new software can also flag content that has not previously been confirmed as CSAM, sorting through large volumes of images so that reviewers can prioritise the material most likely to be CSAM.
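The difference between the two approaches can be sketched roughly as follows. This is a hypothetical illustration only: Google's classifier, its hash database, and its API are not public, so the hash set, the dummy scoring function, and all names below are invented for the example. Real systems also use perceptual hashing rather than the exact cryptographic hash shown here.

```python
import hashlib

# Placeholder database of hashes of previously confirmed material (not real data).
KNOWN_HASHES: set[str] = set()

def image_hash(image_bytes: bytes) -> str:
    # Simplification: exact SHA-256 match. Production systems use perceptual
    # hashes that tolerate resizing and re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def classifier_score(image_bytes: bytes) -> float:
    # Stand-in for a deep neural network classifier. Returns a dummy score
    # derived from the payload size, purely so the example runs.
    return min(len(image_bytes) / 1000.0, 1.0)

def triage(images: list[bytes]) -> list[tuple[int, str, float]]:
    """Build a review queue: known hash matches first, then new images
    ordered by descending classifier score, so reviewers see the most
    likely material earliest."""
    queue = []
    for i, img in enumerate(images):
        if image_hash(img) in KNOWN_HASHES:
            queue.append((i, "known-match", 1.0))  # already-confirmed content
        else:
            queue.append((i, "new", classifier_score(img)))  # never seen before
    # Hash matches carry the maximum score and sort to the front;
    # unseen images follow, highest-scoring first.
    queue.sort(key=lambda item: item[2], reverse=True)
    return queue
```

A hash-only system would stop after the `KNOWN_HASHES` lookup and never rank the "new" images; the classifier score is what lets previously unseen content rise to the top of the queue.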
"Today we're introducing the next step in this fight: cutting-edge artificial intelligence (AI) that significantly advances our existing technologies to dramatically improve how service providers, NGOs, and other technology companies review this content at scale," reads the company's blog post. Google is making the technology available for free to NGOs and its industry partners via its Content Safety API.