War on deepfakes: What Malaysia can learn from the rest of the world

Malay Mail

KUALA LUMPUR, Nov 5 — The rise of deepfake technology has sparked debate over Malaysia’s need to develop laws aimed at combating scams and abuse aided by artificial intelligence.

Like many countries across the world, Malaysia is now seeing the potential dangers of deepfakes, or AI-generated media where a person’s face, voice, or body is digitally altered or replaced to create realistic video or audio clips.

Several high-profile incidents have already occurred this year, with celebrities such as Datuk Seri Siti Nurhaliza Tarudin, athletes like Datuk Lee Chong Wei, and corporate figures like Petronas CEO Tan Sri Tengku Muhammad Taufik having their likenesses used in deepfakes promoting investment scams.

While Malaysia has tabled an Online Safety Bill to enhance cybersecurity in the country, it has yet to formalise plans specifically to tackle deepfakes amid a growing epidemic of online fraud and scams that costs Malaysians hundreds of millions of ringgit annually.

However, some countries have introduced new laws to try to rein in the menace. Here are some examples:

Singapore: Laws against deepfake misinformation

Singapore’s new Elections (Integrity of Online Advertising) (Amendment) Bill represents one of the most proactive efforts in Southeast Asia to address the threat posed by deepfakes.

The bill aims to counter AI-generated content, particularly during elections, where a deepfake video could be used to falsely portray a political candidate as making controversial statements or committing illegal acts.

The law prohibits the publication of content that depicts a candidate saying or doing something they never did.

However, the law applies only if the content is online election advertising; has been digitally manipulated; realistically depicts a false action or statement; and is believable enough for the public to mistake it for reality.

Offenders will face criminal charges for publishing, sharing, or reposting such content.

South Korea: Combatting the deepfake pornography crisis

In 2020, South Korea introduced laws criminalising the creation and sharing of sexually explicit deepfake content without consent, with the offence punishable by a fine of up to KRW50 million (RM157,000) and a prison sentence of up to five years.

According to a 2023 report by US cybersecurity firm Security Hero, 53 per cent of all deepfake pornography depicts South Korean individuals.

Among the most frequently targeted are K-pop idols, including members of groups such as Blackpink, BTS, and Twice, as well as well-known Korean actresses.

In 2019, a notorious sex ring that used deepfake technology to exploit dozens of victims, including minors, was uncovered in South Korea. The ringleader, Cho Ju-bin, was sentenced to 42 years in prison, but the case raised questions about whether the legal system could keep up with the rapid evolution of deepfake technology.

United States: State-level action against non-consensual deepfakes

While the United States has no federal law directly governing deepfakes, some states have introduced legislation to combat the issue. For instance, New York has enacted laws that specifically ban the creation and distribution of non-consensual deepfake pornography. Violators can face up to a year in jail, and victims have the right to pursue civil damages.

The US Congress has also proposed several bills, including the No FAKES Act and the Preventing Deepfakes of Intimate Images Act, which aim to establish a legal framework for dealing with malicious deepfakes that incite violence, facilitate criminal conduct, or interfere with elections.

In one recent incident, billionaire Elon Musk, a supporter of Republican presidential candidate and former US president Donald Trump, shared a deepfake political ad of Democratic presidential candidate and US Vice-President Kamala Harris.

China: Labelling requirements

In China, concern over deepfake technology came to a head in 2019 with the ZAO face-swapping app, which was widely criticised for privacy violations. WeChat, one of China’s largest platforms, restricted the app, citing security concerns. This was a pivotal moment that pushed China to regulate deepfake technology more stringently.

Since then, China has proposed regulations aimed at controlling the use of AI-generated content, including deepfakes, such as the Cyberspace Administration of China's "Deep Synthesis Provisions", which require AI-generated content to be clearly labelled as such.

However, the penalties for non-compliance have not been specified.

China’s regulatory framework was partly inspired by concerns over the use of AI to create digital clones of celebrities and influencers. These clones are increasingly being used to push products 24/7, particularly in the country’s massive e-commerce market.

Indonesia: Ethical guidelines

Indonesia is also beginning to recognise the threat posed by deepfakes. While the country has yet to enact specific deepfake laws, its Financial Services Authority (OJK) released ethical guidelines for the use of AI in fintech applications in early 2024.

These guidelines are intended to hold fintech providers accountable for any harm caused by AI-generated content.