CEO Reacted On Europol Reveals That Criminals Are Using Ai For Malicious Purposes, And Not Just For Deep Fakes
Monday, November 23, 2020
Cybercriminals will use AI in multiple ways: AI systems themselves present a weakness, since they expand the potential attack surface, while other forms of AI, such as deep fakes, are being weaponised for attacks.
A new report from Europol warns that new screening technology will be needed to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.
Experts Comments
Ilia Kolochenko
Cybercriminals have been leveraging Machine Learning (ML) and Artificial Intelligence (AI) for years already. Thanks to the growing abundance of Machine Learning frameworks and data processing available at a very affordable price, Machine Learning has become omnipresent and easily accessible even to small cyber gangs.
At ImmuniWeb, we have started to see proposals on the Dark Web related to implementation and maintenance of Machine Learning models for a wide spectrum of criminal purposes, spanning from improving phishing campaigns and identity theft to smart WAF bypass and exploitation of web-based vulnerabilities undetectable by automated scanners.
Cybercriminals will likely outstrip cybersecurity companies in the practical usage of ML/AI in the near future. Most of the outcomes, however, are unlikely to bring substantial changes or novel major risks, given that ML/AI is narrowly applied to accelerate, amplify and enhance existing attack vectors and techniques.