Employee duped into wiring $25m of company funds by video deepfake scam
Tuesday, February 6, 2024
Businesses must create cybersecurity protocols for protecting company assets and employees from increasingly sophisticated deepfake threats.
Dr Ilia Kolochenko, CEO and chief architect at ImmuniWeb, told Verdict that generative AI (GenAI) is a gift for scammers and hackers, although he considers a spike in GenAI-bolstered cyberattacks unlikely.
“Abundance of freely available GenAI solutions to generate texts, voice and video provides scammers with unprecedented opportunities to trick their victims to pay money, disclose trade secrets or sensitive financial information to third parties, or even manipulate financial markets, let alone interference with politics and elections,” Kolochenko said.
The UK, which recently outlined its Online Safety Bill, amended the legislation in 2022 to criminalise non-consensual deepfake pornography.
Kolochenko believes that regulation alone is not enough to stop the creation and spread of deepfakes.
“What we really need is to add AI-content detection mechanisms to all major social networks and platforms where users can share content, as well as integrating detection of AI-generated content to spam filters, so all non-human content will be visibly marked as such,” Kolochenko added.
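To illustrate the kind of platform-side marking Kolochenko describes, here is a minimal, hypothetical sketch in Python. The `detect_ai_probability` function is a stand-in for a real AI-content classifier (text, voice or video), not a reference to any specific product; the point is simply that submissions scoring above a threshold get a visible label before they reach other users.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a real AI-content detector; a production system
# would call an actual classifier here rather than matching a few phrases.
def detect_ai_probability(content: str) -> float:
    generic_markers = ("as an ai language model", "regenerate response")
    return 1.0 if any(m in content.lower() for m in generic_markers) else 0.1


@dataclass
class Submission:
    author: str
    content: str
    labels: list = field(default_factory=list)


def mark_if_ai_generated(post: Submission, threshold: float = 0.8) -> Submission:
    """Attach a visible 'AI-generated' label when the detector score crosses the threshold."""
    if detect_ai_probability(post.content) >= threshold:
        post.labels.append("AI-generated")
    return post


if __name__ == "__main__":
    post = Submission("user123", "As an AI language model, I can draft that invoice email for you.")
    print(mark_if_ai_generated(post).labels)  # ['AI-generated']
```

The same pattern could sit inside a spam filter or upload pipeline, with the label rendered on the post rather than stored silently, which is the visible marking Kolochenko argues for.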
Dark Reading: Pegasus Spyware Targets Jordanian Civil Society in Wide-Ranging Attacks
SecurityWeek: FTC Orders Blackbaud to Address Poor Security Practices