Chinese authorities warn public against AI fraud

In a recent case, the victim was tricked into initiating a total transfer of 4.3 million yuan via a company account after having a video chat with someone masquerading as his friend.


May 26, 2023

BEIJING – Public security departments in multiple cities have recently urged people to be more vigilant after several cases of AI fraud were exposed in which artificial intelligence tools were used to mimic people’s voices and appearances.

In a case recently revealed by the public security bureau of Baotou, Inner Mongolia autonomous region, the victim was tricked into initiating a total transfer of 4.3 million yuan ($612,000) via a company account after having a video chat with someone masquerading as his friend.

With the cooperation of the bank, police were able to halt a transfer of 3.37 million yuan, but said they are still trying to recover 931,600 yuan that had already been sent.

On April 20, a man surnamed Guo, a legal representative of a technology company in Fuzhou, Fujian province, received a video call via WeChat from a scammer. Using AI to masquerade as a friend of Guo’s, the scammer asked him for help.

Guo’s “friend” said he was bidding on a project in another city and wanted to use Guo’s company’s account to submit a bid of 4.3 million yuan. He promised to pay Guo immediately.

The scammer then sent a bank account number to Guo and after that provided a screenshot of the bank transfer voucher to prove that he had transferred the money to Guo’s company’s account.

Guo then transferred 4.3 million yuan to the provided account in two separate payments.

When Guo called his real friend for verification after completing the transfers, his friend denied that he had made a video call to Guo or had asked him to transfer any money.

“He chatted with me via video call, and I also confirmed his face and voice in the video. That’s why we let our guard down,” Guo said.

The case created a buzz on social media platforms and shocked netizens, who said that people who are trusting and lack awareness, such as children and seniors, are especially vulnerable to such a high-tech scam. They called on the government to strengthen the management of related technologies and crack down on crimes.

On May 24, the Internet Society of China also issued a statement about the AI face-swapping scam. It said that as “deepfake” technology — which is often used to mimic the voices and appearances of others in videos and audio recordings — becomes more freely available, related products and services have gradually increased.

It is becoming increasingly common for some people to use AI for criminal purposes, including fraud and slander, and people need to be vigilant and strengthen their awareness to keep themselves from becoming victims of scams, the statement said.

People should be more aware of personal information protection and not be so quick to provide images of their faces, fingerprints and other biometric data to strangers. They should not disclose details of their identity cards, bank cards, verification codes and other similar information, it said.

It reminded people to carefully manage their social media accounts, especially when logging in on unfamiliar devices, to prevent private information from being stolen.

In the age of AI, texts, voices, images and videos can all be deeply synthesized. In cases such as Guo’s, it is necessary to verify requests through additional channels, such as calling the person involved directly, rather than transferring money straight away after communicating only by text or video, no matter who the other party appears to be, it said.

In November, the Cyberspace Administration of China, the Ministry of Industry and Information Technology and the Ministry of Public Security jointly issued provisions on the administration of deep synthesis of internet information services, setting clear constraints on the generation, replacement and manipulation of human faces, as well as on the synthesizing and mimicking of human voices.

Some platforms and enterprises have launched initiatives to ban the use of generative AI technology to create and publish infringing content, including but not limited to content that violates portrait rights and intellectual property rights.

Zhou Linna, a professor at the School of Cyberspace Security of Beijing University of Posts and Telecommunications, said “AI face-swapping” is becoming a more common tool for online fraud and may fuel distrust and fear in society.

“AI technology is a new thing and can be very convenient, but it could also affect our lives in negative ways,” Zhou said. “It is necessary to improve and create laws and regulations to properly use and govern such technology.”
