Sophisticated technology fraud has reached alarming levels with even celebrities and professionals falling victim to scammers. Deepfakes pose a serious threat to the global financial and social order and require urgent redress. Chen Xiyun and Oasis Hu report from Hong Kong.
At first, it appeared to be just routine corporate business, but it turned out to be one of the biggest artificial intelligence-related stings in Hong Kong’s history.
In January, an employee at the Hong Kong branch of London-based professional services giant Arup Group was led to believe he was talking to the company’s “chief financial officer” and other “colleagues” at a fraudulent video conference. Apparently oblivious to the meticulously executed scam, he was duped into transferring HK$200 million ($25.7 million) to fraudsters through more than a dozen online transactions, deceived by fake video images and voices.
The victim was appalled when he learned a week later it was all an elaborate ruse — he was the only authentic company employee at the “conference”, while all the others were found to have been mock-ups.
READ MORE: Deepfake video scams prompt police warning
It was one of Hong Kong’s biggest AI swindles to have come to light and the first local case of scammers using multiple deepfake characters to target a single victim, distinguishing it from previous cases of similar deepfakes being deployed in one-on-one video scams.
Hong Kong police revealed that the Arup employee involved had earlier received social media messages and an email from the group’s “chief financial officer” who proposed a confidential transaction and meeting. Despite doubts, he went ahead with the web “conference” during which his suspicions waned once he saw the “chief financial officer” who appeared to resemble the company’s top executive, along with several “colleagues”.
The video conference was pre-recorded, created using publicly available video clips and recordings of the impersonated executives’ voices. Because it was pre-recorded, there was no actual interaction between the employee and the fraudsters. After giving instructions, the scammers ended the conference and issued further payment transfer instructions through instant messaging software.
Hong Kong police also unveiled two other deepfake cases. In May, a staff member of another multinational company lost HK$4 million after a sham video meeting in which a deepfake portrayal of a “chief financial officer” instructed the employee to transfer the money.
Another deepfake case occurred in 2023 — the first of its kind in the city. A local fraud syndicate was believed to have stolen victims’ identities and used an AI face-swapping program to apply for online loans totaling HK$200,000 from finance companies. Police smashed the syndicate in August that year and detained nine suspects.
Between Nov 1 and May 31, 21 online deepfake video clips surfaced in the city, some impersonating SAR government officials and celebrities.
One of the videos depicted Chief Executive John Lee Ka-chiu as an “investment promoter”, using a voice almost identical to his, telling viewers in a television program: “You only need to invest HK$2,000 and, at the end of the week, you’ll have about HK$60,000 in your account.”
A report released in May this year by Sumsub — an international AI-based identity verification platform — revealed a staggering upsurge in deepfake cases in the Hong Kong Special Administrative Region. The first quarter of 2024 saw a 10-fold rise, compared to the same period a year ago — an increase that far exceeded the global average growth rate of 2.5 times.
The report noted that Hong Kong’s financial technology sector has been hardest hit, with a 220 percent rise in cyberattacks — the highest in the Asia-Pacific region. The agency also observed a significant uptick in deepfake cases, a year-on-year increase of more than 245 percent. The findings indicated substantial growth in deepfake fraud in countries like the United States, India, Indonesia, Mexico and South Africa. According to the report, China, Spain, Germany, Ukraine, the US, Vietnam and the United Kingdom had the highest numbers of deepfake cases in the first three months of this year.
Virus to city’s immune system
Chan Chung-man, general manager of the digital transformation division of the Hong Kong Productivity Council and spokesman for the Hong Kong Computer Emergency Response Team Coordination Centre, explains that deepfake scams are a form of social engineering, distinct from traditional cyberattacks that mainly target systems or data.
In deepfake scams, fraudsters pretend to be someone else to deceive the victim through videos and conversations to achieve their goals. The person being impersonated is usually an authoritative or prominent figure.
The prevalence of deepfakes in recent years indicates that the underlying technology has become more mature and sophisticated, says Chan. The fabricated content mimics real people more convincingly, making it harder for a victim to discern the truth from a short video. Deepfake tools have also become increasingly accessible and easier to use.
With the ready availability of open-source tools and commercial software, even amateurs with limited technical skills can create high-quality deepfake content easily and cheaply. The low cost of creating deepfakes upends the common assumption that producing a convincing fake video requires substantial time and resources.
Unaware that videos can be fake, people tend to think that “seeing is believing”, says Chan, noting there is inadequate public education concerning new technology scams. Historically, online scams primarily involved text-based methods like phishing emails and SMS (short message service), and society has done considerable work educating the public on how to detect such frauds. However, there has been a significant gap in public education targeting newer scams.
“When residents don’t know much about deepfakes, it’ll be easier for them to be duped. It’s similar to the body’s immune system that has no resistance to a new virus,” says Chan. As for the above-average growth rate of deepfake incidents in Hong Kong, he blames insufficient security measures and training for employees.
A survey conducted in November by the Office of the Privacy Commissioner for Personal Data, Hong Kong, and the Hong Kong Productivity Council measured the Hong Kong Enterprise Cyber Security Readiness Index — an indicator of companies’ ability to withstand and survive cyberattacks. The index fell 6.3 points, from 53.3 in 2018 to 47 in 2023 — the lowest in the past five years. The drop indicates that most local businesses lack sufficient network security measures.
The survey also noted that less than 30 percent of enterprises provided cybersecurity training for employees, and below 20 percent conducted regular cybersecurity drills.
Using AI to combat AI
Legislator Tan Yueheng, who is also chairman of publicly listed BOCOM International Holdings, believes the surge in deepfake incidents in Hong Kong stems from the city’s regional characteristics. As a global financial hub with high population mobility and a huge volume of daily cross-border transactions verified through diverse identity-verification methods, the city is particularly vulnerable to deepfake scams.
Tan warns that, with the advance of deepfake technology, the situation may deteriorate, potentially giving rise to new types of fraud, such as scams that combine multiple technologies. This could make law enforcement and fraud prevention significantly more difficult.
Hong Kong’s existing laws, such as the Theft Ordinance and the Personal Data (Privacy) Ordinance, are being used to counter deepfake fraud that involves deceptive business practices and identity theft. Offenders may face various charges and penalties that could lead to imprisonment and fines. However, there is still no dedicated law to curb deepfake technology and fraud. This legal gap can create challenges and disputes for law enforcement agencies, and may also hinder the ability of deepfake fraud victims to protect their legal rights and seek compensation.
To adapt to the evolving technology, Tan suggests that dedicated deepfake laws be enacted. From a global perspective, the US is drafting new legislation to ban the production and distribution of deepfakes impersonating individuals. Singapore recently enacted the Online Criminal Harms Act, which empowers the authorities to deal more effectively with online activities that are criminal in nature. Various countries and international organizations, including the Chinese mainland, have also implemented regulatory measures relating to AI to combat unlawful use.
However, Tan admits that creating such laws is not easy. “What would be the scope of curbing deepfake technology? Should deepfake technology developers or users who create false images by using deepfake technology be regulated? How should the issue of using deepfake videos to infringe copyrights be addressed and defined? All these questions should be clarified.”
Deepfake fraud is mostly perpetrated through e-commerce platforms or video calls on social media platforms. Legislation should require online platforms to disable or restrict suspicious accounts, postings or web pages, Tan suggests. Rules should also mandate that internet service providers block access to sensitive domains and that app stores remove fraudulent apps, to prevent the public from suffering financial losses.
Given the syndicated, cross-border and technologically advanced nature of deepfake cases, strengthening regional cooperation among law enforcement agencies is also essential, Tan adds.
Tian Feng, dean of the SenseTime Intelligent Industry Research Institute, proposes using AI to combat AI-related crime. He says advanced technologies already exist that can be used to tackle deepfake scams. One such technology available in the market is “digital decoupling” that can add a layer of digital interference to images, rendering them unreadable by deepfake tools, thus deterring fraudsters from using online photos and videos to generate manipulated content.
Moreover, AI-detection technology can also play a crucial role. These tools can identify whether videos or images have been manipulated by AI, or determine whether the content is entirely AI-generated.
ALSO READ: City may need laws to tackle social media fraud
Social media applications can leverage these AI detection tools to assess the authenticity of the content on their platforms. If suspected AI-generated content is detected, such applications can provide pop-up alerts to users to reduce the risk of deception or scams.
On the mainland, platforms like Douyin and Xiaohongshu have deployed such technology that promptly alerts users when AI-generated content is detected. Tian suggests that more online platforms use such technology to alert users. However, technology merely serves as a medium in deepfake fraud, with the victim’s lack of vigilance helping scammers to succeed, he says, adding it’s crucial and pressing to promote public awareness.
Given that most people are unfamiliar with the advancements made in AI, such as deepfakes, it’s vital to disseminate knowledge about the technology, warn of its potential consequences when misused, and encourage residents to take preventive measures.
Efforts to educate vulnerable groups, such as children, the elderly and low-income households, should be stepped up as they are less exposed to the latest technologies, says Tian.