As Singapore faces deepfake surge, authorities warn of threats from cybercrime, online scams


SINGAPORE — Deepfake incidents in Singapore have increased fivefold in the past year alone, and authorities have warned that the technology could be misused to commit cybercrimes.

The Sumsub Identity Fraud Report 2023, which was released last month, also showed a 10-fold increase in the number of deepfakes detected globally across all industries from 2022 to 2023.

Deepfakes refer to media that has been altered by artificial intelligence (AI), using a technique called facial re-enactment to manipulate the performance of a subject in an existing video.

Such digital tools are getting more accessible and sophisticated, making it easier than ever to create deepfakes, said experts.

“The scammers can also use it to their advantage,” said Mr Kevin Shepherdson, CEO and founder of data privacy research firm Straits Interactive, adding that it introduces a new aspect to crime.

“If I were a scammer, I would use generative AI to create fake job posts, then take someone else’s LinkedIn information and do phishing. So I will target the person, and I can use the technology to create a fake-looking HR (human resource) manager so that the victim would think that he or she is communicating with a real person.”

Professor Mohan Kankanhalli, dean of the National University of Singapore’s School of Computing, told CNA that generative AI has “democratised the creation of deepfakes” and there will only be more deepfake content produced at a faster rate in the future.

He stressed that regulation of the sector will have to keep pace with developments, and that this can be done through a risk-based approach aimed at minimising harm.

“A risk-based approach – I think this is what many of the European countries are trying to do – is trying to understand what are the risks posed by different types of deepfakes and then regulate at two levels,” said Prof Kankanhalli, who is also deputy executive chairman of AI Singapore.

At an individual level, it would mean imposing penalties on the creators of deepfake content, while at a platform level, it would require companies to act when they are notified that such content is being shared on their platforms.

Some countries are pursuing even stronger regulation, said Prof Kankanhalli. China, for instance, requires companies to disclose the software used to create deepfakes, along with their recommendation algorithms.

Prof Kankanhalli said tackling the problem of deepfakes is “going to be extremely challenging”.

When the first generation of deepfakes came out, they were fairly easy to detect as there were imperfections such as eyes that did not blink, he said. However, scammers identified the weakness and improved the software.

Generative AI on the whole is “becoming better and better” and improving quickly over time, said Prof Kankanhalli.

He emphasised that regulators need to understand the technology, and cited Singapore’s Infocomm Media Development Authority (IMDA) as a body that has been very much involved in the AI scene and aware of developments.

“I think technologies have to work with regulation. However, in the long run, I think it is going to be a cat-and-mouse game, where regulation will always be chasing technology,” he said.

“In the long run, we therefore have to look at educating people about the risks of seeing and believing, and the risks of creating such software, and making people responsible and ethical about the usage of such software.”

Another area with a rising need for policies to govern the use of AI is small- and medium-sized enterprises (SMEs).

The use of generative AI is gaining traction in Singapore’s business community, with a survey by Straits Interactive revealing that about two in three SMEs are either using or planning to use the technology to generate texts, documents and videos.

Software developer Ambient Singapore, for instance, uses generative AI to build websites and write proposals for clients, boosting productivity by almost 50 per cent.

“Our junior executives use ChatGPT to generate proposals (and) generate reports for our clients,” its CEO Ivan Chong told CNA.

“Previously, we had to check their work. But right now, there’s much less checking because the grammar is taken care of. They just have to put in the facts, put in the figures, and then it’s very easy for them. And that saves us a lot of time.”

Noodle Factory, an AI-powered teaching assistant, also relies on the new technology for its business, letting users feed content into the software to generate test questions.

Its CEO Yvonne Soh said educators can create AI tutors without requiring any knowledge of coding, and curate their own content simply by dropping in their materials.

“As a result, students will be able to chat with these AI tutors who already have all the knowledge that the teacher has,” said Ms Soh.

While there are benefits to using AI, privacy breaches are not uncommon, with information such as contact details, usage patterns, device information and location all potentially exposed.

Mr Ang Yuit, vice-president of the Association of Small and Medium Enterprises, told CNA: “Some companies are putting, let’s say, their HR policies (and) some of their legal stuff inside to try to get the AI to resummarise or recategorise or rewrite some parts of it, and you may have to deal with copyright issues or personal data issues.”

These issues can arise because most firms rely on open-source tools, whose source code is freely available and can be redistributed or modified by anyone.

To reduce cybersecurity risks, SMEs told CNA they are taking precautions such as limiting the information that they upload.

“When institutions are putting in their content, their personal data and their private copyrighted data, we want to make sure that it’s not shared,” said Ms Soh.

“Any content that customers put into the platform is always containerised. And it’s never used to train any large language model or any AI model that is not used by that institution.” CNA
