AI-generated content appears in LinkedIn phishing scams, but can users spot it?

By Science Reporter for Evening Standard
Saturday, February 25, 2023

Generative AI tools such as ChatGPT aren’t just helping students cheat on their coursework; they might also be helping cybercriminals cheat you out of your money and personal information.

Will AI make it easier to get scammed?

This is an issue that is likely to intensify rapidly over the next decade, according to Dr Ilia Kolochenko, adjunct professor of cybersecurity at Capitol Technology University.

In particular, ChatGPT-style tools may make it far easier for would-be hackers from non-English-speaking countries to craft convincing, legible scam messages, according to Kolochenko, who called the technology a “gift” to this demographic. Many of the scam emails we’re all familiar with aren’t exactly Shakespearian in quality, so you can see how AI might be a step up for groups looking to produce official-sounding documents.

“We have a lot of young, talented cyber criminals who simply don’t have great English skills,” explained the professor.

This isn’t the first time we’ve seen this type of thing. Kolochenko says he has anecdotally seen other phishing emails that looked as though they had been generated using AI, as well as cybercriminals using AI chatbots to generate communications with their victims. According to the professor, these cybercriminals often pretend to be the tech support desk of a large tech firm in order to extort payments.

How can I protect myself against AI fakes?

Kolochenko said you should be suspicious of written communications with flawless grammar and spelling, as real humans make typos, type in lowercase, or use colloquial English in their emails.

“If you receive a text that looks too good to come from your colleague, who almost always writes very short and practically incomprehensible emails, you should ask yourself why,” he explained.

Kolochenko says consumers should also be extremely wary of context-related cues, such as an email supposedly from a US-based colleague arriving at 9 am UK time.
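To make that timing cue concrete, here is a minimal sketch in Python of how such a check might work. Everything in it, the function name, the assumed 8 am to 6 pm working day, and the example timezones, is an illustrative assumption rather than anything described in the article:

from datetime import datetime
from zoneinfo import ZoneInfo

# Assumed local business hours for the purported sender (illustrative only).
WORK_START, WORK_END = 8, 18

def outside_business_hours(sent: datetime, sender_tz: str) -> bool:
    """Flag a message sent outside the purported sender's local workday."""
    local = sent.astimezone(ZoneInfo(sender_tz))
    return not (WORK_START <= local.hour < WORK_END)

# Example: an email arriving at 9 am UK time from a "US-based colleague"
# was sent at 4 am New York time, which is worth a second look.
sent = datetime(2023, 2, 24, 9, 0, tzinfo=ZoneInfo("Europe/London"))
print(outside_business_hours(sent, "America/New_York"))  # True

A real mail filter would of course combine many such signals, but the point stands: the metadata around a message can betray a scam even when the prose itself is now machine-polished.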

Should social platforms step up to prevent AI fraud?

There is a lot that big-tech companies like LinkedIn and Twitter can do to regulate AI-generated content on their platforms, according to Bores, but it’s unlikely they will take appropriate measures unless they are forced to by future legislation.

Kolochenko feels that, given the massive investment Microsoft has poured into AI technology, it should eventually be able to develop reliable ways of spotting AI-generated content. At the moment, however, it is simply not possible to monitor the platform consistently.

If a high-profile fraud based on AI-generated material were to appear on a platform such as LinkedIn and go viral, a politician delivering a deepfaked inspiring speech, for example, that could well prompt the big-tech platforms to take action, according to Kolochenko. The professor also predicts that the big social platforms will soon ban users from posting AI-generated content without a disclaimer as part of their terms and conditions. How such a ban might be enforced is anyone’s guess.

