SINGAPORE: News emerged in late November that over 100 Singaporean public servants, including five ministers, had received extortion emails containing deepfake images. The messages demanded US$50,000 in cryptocurrency in return for not publishing “compromising” videos.
The emails contained purported screenshots of those videos showing the victims’ faces, which appeared to have been lifted from public sources such as LinkedIn.
This is not the first extortion plot against public servants in Singapore. Earlier this year, several members of parliament received threatening letters containing obscene images manipulated in a similar way.
Such incidents highlight concerns over the capabilities of artificial intelligence (AI) and its potential to augment blackmail attempts.
DEEPFAKE BLACKMAIL ON THE RISE
Similar attempts have occurred elsewhere in Asia. In November, an extortion scheme targeted male politicians in South Korea: victims’ faces were superimposed onto explicit images, and a ransom was demanded in return for keeping the altered images private.
In 2019, an alleged deepfake sex video targeting a Malaysian politician was circulated on WhatsApp.
The capacity of AI to create realistic content carries significant risks of harmful exploitation. With AI-powered tools widely available, anyone can easily and rapidly create a deepfake, using techniques such as face swapping to replace one individual’s likeness with another’s.
Cybercriminals have also adopted deepfake technology for other malicious purposes, such as investment scams. Such deepfakes affect not only politicians and celebrities but also ordinary people.
DEEPFAKES AS PART OF A CYBERCRIMINAL’S TOOLBOX
Public figures such as politicians and business leaders are prime targets for deepfake extortion, given the wealth of images and videos of them available online. Malicious actors can use deepfakes not only for financial gain, but also to extract information or compromise their targets’ careers.
This is especially concerning given the influence such figures wield and the sensitive data they have access to.
Cybercriminals have also used similar strategies against ordinary people. In June 2023, the US Federal Bureau of Investigation (FBI) warned of “sextortion schemes” in which bad actors create deepfake pornography from content posted on social media, then pressure victims either to pay or to send real explicit photos or videos of themselves.
Such blackmail attempts can cause severe reputational harm and mental distress. Victims fear the embarrassment of having the deepfakes leaked online if they do not pay the ransom. Even when they know the content is fake, there remains the fear that the public might believe otherwise.
Sadly, women make up the overwhelming majority of victims in deepfake pornography campaigns. A 2023 study by cybersecurity firm Security Hero, which analysed almost 100,000 deepfake pornographic videos, found that 99 per cent of the individuals depicted were women. Another study in 2024 by cybersecurity firm ESET UK revealed that nearly two-thirds of women worry about becoming victims of deepfake pornography.
UNDERMINING CREDIBILITY AND SEEDING DOUBT
In some respects, the authenticity of a video or image might not matter much to public perception. This calls to mind the notion of the “liar’s dividend”: as the public becomes aware that convincing fakes exist, those who spread misinformation benefit, because doubt can be cast on any content, genuine or not.
Deepfakes are a powerful tool for persuading people to believe in events that never happened, and can be co-opted by malicious actors to further their goals. The mere suggestion of scandal can damage a victim’s reputation.
On the other hand, there is a risk that with the rise of deepfakes, those accused of misconduct could discredit legitimate photos and videos by alleging that they are manipulated.
This presents real challenges. For instance, if a whistleblower produces evidence of wrongdoing by a corporate entity, the company in question could simply claim that the content is fake. Public uncertainty over what is true could erode trust and fuel scepticism, even cynicism, about information online.
PREVENTATIVE MEASURES
Advances in AI will make deepfakes more difficult to identify, further empowering their malicious uses. Greater public understanding of AI capabilities and the dangers of deepfake sextortion will go a long way towards blunting these threats.
With so much of our lives online, there is an abundance of content for malicious actors to exploit. We can be more careful about what we post and limit the privacy settings on our social media accounts to trusted friends and people we know. Reporting any sextortion attempt to the police and the relevant social media platforms is also a good first step.
In discerning whether something we see online is real or not, we can try to ascertain the motivation behind its creation and dissemination. One of the best strategies is to question content that elicits an emotional reaction.
As deepfake technology evolves and malicious actors adapt, it is crucial that we stay updated on the latest developments and remain vigilant to such online threats.
Dymples Leong is an Associate Research Fellow with the Centre of Excellence for National Security (CENS) at the S Rajaratnam School of International Studies (RSIS), Nanyang Technological University, Singapore.