Debating solutions: Experts discuss incentives, regulation, and social media’s role in combating child-on-child sexual abuse

Malay Mail

KUALA LUMPUR, June 20 — Amid the distressing narrative surrounding child abuse, recent reports have revealed a troubling trend: minors are increasingly becoming perpetrators of sexual abuse against other children. Shockingly, some victims are below 10 years old.

In 2022, the Disrupting Harm in Malaysia report examined online child sexual exploitation and abuse, revealing that 24 per cent of children in Malaysia had unintentionally come across sexual content online, including in advertisements, social media feeds, search engines, and messaging apps. Meanwhile, 17 per cent admitted to actively seeking out such material.

The report also found that children aged 16 and 17, particularly boys, were most likely to be exposed to sexual images and videos online.

The report was produced by ECPAT, an international NGO network dedicated to fighting the sexual exploitation of children, together with Interpol and the United Nations Children’s Fund (Unicef).

Making social media giants step up

Previously, national daily The Star reported that social media platform providers Meta and TikTok had pledged their commitment to online safety measures for children under 13.

But can more be done, such as incentivising social media operators?

“Any method used to incentivise online social networks must first and foremost impact their revenue stream. Money talks, and unless there is a major impact on their revenue, they will not be incentivised to make important decisions,” Aaron Mokhtar, a certified information security awareness manager and founder of Digital Ihsan, told Malay Mail.

Highlighting Facebook as an example, he added that due to the potential hit on its advertising revenue, the social media operator does not have a “great record” in preventing child exploitation.

Last year, in announcing a lawsuit against Meta Platforms and its CEO Mark Zuckerberg, New Mexico Attorney General Raúl Torrez claimed that Facebook and Instagram steer children to explicit content even when no interest is expressed, enabling child predators to find and contact minors.

CBS News reported Torrez as alleging that predators pressured children into providing explicit photos or participating in pornographic videos. He also alleged that instead of providing “safe spaces for children,” the platforms allow predators to trade child pornography and solicit children for sex.

However, Meta refuted these allegations, stating that it uses various measures to prevent sexual predators from targeting children. CBS News reported that Meta claimed to have disabled over 500,000 accounts in a single month for violating its child safety policies.

Aaron said it would be challenging for Malaysia to incentivise foreign social networks, given the limited powers it can wield against social media companies headquartered beyond its borders.

“But if you’re looking to incentivise social media networks, some of the methods would be to enforce strict compliance with existing child protection laws and introduce new regulations that mandate social media platforms to implement robust child safety measures.

“Significant fines and other penalties should also be imposed for platforms that fail to protect children adequately. This creates a financial disincentive for neglecting child safety,” he added.

Aaron also called for tax breaks, grants, or other financial incentives for platforms that demonstrate a commitment to child safety through innovative technologies and comprehensive safety measures, as well as funding for research and development in child safety technologies, to encourage platforms to invest in better monitoring and protective measures.

Alarming trends

In an interview with The Star in April, Assistant Commissioner of Police Siti Kamsiah Hassan, the principal assistant director of Bukit Aman’s Sexual, Women, and Child Investigation Division (D11), revealed that there were 912 reported cases of sexual crimes involving underage suspects last year.

These included 601 instances of rape, 17 cases of outrage of modesty, 18 incidents of unnatural sex, 23 reports of sexual harassment, and three cases related to the distribution or possession of obscene materials.

Of the 912 cases, 20 involved underage suspects attempting to use a child for child pornography.

AI and safety measures

To effectively address the issue, Aaron called for social media platforms to provide parents with the knowledge and tools they need to educate their children, including parental control features.

For this, he said such platforms must be incentivised to develop and enhance user-friendly parental control features that allow parents to monitor and manage their children’s online activities effectively.

“Incentives could encourage platforms to develop and implement advanced monitoring tools that detect and report inappropriate behaviour. This might include artificial intelligence or AI-driven content moderation, real-time flagging systems, and improved reporting mechanisms for users.

“Platforms could be incentivised to collaborate closely with law enforcement agencies. This would facilitate quicker response times to incidents of abuse and ensure that offenders are promptly identified and prosecuted.

“Platforms might be encouraged to conduct regular safety audits, ensuring that their policies and technologies are up to date and effective in protecting children. These audits could be tied to financial or reputational incentives,” he added.

CyberSecurity Malaysia chief executive Datuk Amirudin Abdul Wahab supported the notion of incentivising online platforms, which he said can “be a powerful tool that complements stricter legislation and educational efforts.”

Amirudin said that the focus should be on creating a win-win situation where platforms benefit from a safer environment while fulfilling their social responsibility.

He said that incentives can encourage collaboration between platforms to share successful strategies and best practices in combating child-on-child sexual abuse.

“This can also help in the development of programmes that recognise and reward users who consistently report abuse or promote online safety. This incentivises responsible user behaviour and creates a safer online community,” he added.

Balancing child safety and freedom of expression

However, Amirudin said that it is also crucial to note that incentives alone are not a “silver bullet.”

“The design of incentive programmes should be transparent, with clear metrics for measuring progress and ensuring platforms aren’t just gaming the system to qualify for rewards.

“Finding the right balance between child safety measures and freedom of expression is also vital. Incentives should not restrict legitimate online discourse,” he added.

Amirudin said that while it is not a perfect solution, safety technology can be a powerful tool in the fight against online crimes involving children.

These include content moderation, cyberbullying detection using AI, user tracking and identification, as well as automated reporting.

While social media platforms have deployed artificial intelligence (AI) to detect and remove exploitative content involving minors, the evolving tactics used by predators necessitate continuous refinement of these AI algorithms.

“Continuous training and refinement of AI algorithms are needed as this can reduce false positives and negatives, improving accuracy in detecting harmful content.

“AI can be designed to adapt to new evasion tactics by learning from new data and updating its detection methods accordingly, while sharing AI models and threat intelligence among platforms can create a unified front against evolving threats,” he added.
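
To make that retraining loop concrete, here is a minimal, hypothetical Python sketch of the approach Amirudin describes: a classifier flags suspect messages for review, moderators confirm or reject the flags, and the model is refit on the newly labelled examples so it adapts to new evasion tactics. The messages, labels, threshold, and model choice are invented for illustration and do not represent any platform’s actual system.

```python
# Toy moderation loop: flag -> human review -> retrain.
# All data and parameters below are illustrative, not any real system's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical seed training data: 1 = grooming/abusive, 0 = benign.
texts = [
    "send me a private photo or i tell everyone",
    "don't tell your parents about our chats",
    "anyone watching the match tonight?",
    "homework group call at 8pm",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

FLAG_THRESHOLD = 0.5  # tuning this trades false positives against false negatives

def flag_for_review(message: str) -> bool:
    """True if the classifier thinks a human moderator should see the message."""
    p_harmful = model.predict_proba([message])[0][1]
    return p_harmful >= FLAG_THRESHOLD

def retrain(reviewed_texts: list[str], reviewed_labels: list[int]) -> None:
    """Fold moderator-confirmed labels back into the training set, so new
    evasion tactics (slang, deliberate misspellings) start being caught."""
    texts.extend(reviewed_texts)
    labels.extend(reviewed_labels)
    model.fit(texts, labels)
```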

Jurisdictional hurdles

Amirudin said that implementing AI with strong ethical guidelines is also needed to help balance the need for safety with the protection of user privacy, ensuring responsible use of technology.

He highlighted other potential loopholes, including jurisdictional challenges that limit the scope of enforcement activities, and rapid technological change that can prevent laws from keeping pace with the fast-evolving digital landscape.

“Balancing user privacy with the need for monitoring and protection can be complex, and ensuring that platforms are held accountable for abuse occurring on their services is still an evolving area,” he added.

Cybersecurity specialist Keith Rozario, however, disagreed with giving social media platforms incentives to curb child-on-child sexual abuse issues and called for regulatory measures instead.

“The social media platforms don’t really need incentives; they’re multi-billion, near-trillion-dollar companies. There’s nothing any single government can do to financially motivate them. What we need is regulation to help,” he said.

Echoing Amirudin, Rozario also pointed to jurisdictional challenges as being an obstacle.

“The challenge is that most, close to all, of these companies are based in the US. They’ve been ‘marinated’ in US law, especially the First and Fourth Amendments, so it’s hard to create any checks,” he said.

The Apple example

Rozario also noted that while child sexual abuse material (CSAM) is seemingly the easiest area to tackle, given its illegality in every jurisdiction, people still have a right to free speech and a right against unreasonable search and seizure.

“Apple tried to implement an on-device CSAM search, which was well thought out, but even then, they didn’t roll it out. Presumably due to public uproar. This was perhaps the most private version of CSAM detection devised, but even then, it was not implemented,” Rozario said.

In December 2022, technology website Wired reported Apple’s decision to abandon efforts to develop a privacy-preserving tool to detect CSAM on its iCloud photo storage platform. The controversial project, initially unveiled in August 2021, aimed to scan iCloud photos for CSAM without compromising user privacy.

However, it faced significant backlash from digital rights groups and researchers, who raised concerns over the potential for abuse and exploitation, leading Apple to pause the initiative in September 2021.
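
To illustrate in broad strokes why on-device matching was considered privacy-preserving, here is a heavily simplified, hypothetical Python sketch: each photo is reduced to a short perceptual hash that is compared against hashes of known illegal images, so the photos themselves never have to leave the device. Apple’s actual design (a neural-network hash combined with cryptographic threshold schemes) was far more elaborate; the hash function, blocklist entry, and distance threshold below are illustrative stand-ins.

```python
# Simplified perceptual-hash matching, illustrating the general idea only.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Classic 'aHash': shrink to an 8x8 grayscale image, then set one bit
    per pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist: hashes of known illegal images (a made-up value here).
KNOWN_BAD_HASHES = {0x8F3A61B2C4D5E6F7}
MATCH_DISTANCE = 5  # a small distance means a near-duplicate image

def matches_known_material(path: str) -> bool:
    """Compare a local photo's hash against the blocklist without uploading it."""
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= MATCH_DISTANCE for bad in KNOWN_BAD_HASHES)
```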

“While AI might be able to solve some of this, people have a right against unreasonable search regardless of whether that search is conducted by a robot or a human,” Rozario added.