GDPR 2.0: What do Europe’s new AI rules mean for businesses?

IT PRO
Monday, June 28, 2021

Failing to comply will carry consequences similar to those under the General Data Protection Regulation (GDPR), which became the de facto privacy standard for many of the world’s largest companies after it came into effect in May 2018. Companies that break the AI rules face fines of up to 6% of their global turnover or €30 million, whichever is higher.

Risky business

At the macro level, the EU’s new rules take aim at “high-risk” AI systems, such as facial recognition, self-driving cars, and AI systems used in the financial industry. In these areas, those deploying AI systems will need to:

- undertake a risk assessment and take steps to mitigate any dangers;
- use high-quality data sets to train the system;
- log activity so that AI decisions can be recorded and traced;
- keep detailed documentation on the system and its purpose, to prove compliance with the law to government regulators;
- provide clear and adequate information to the user;
- have “appropriate human oversight measures”;
- ensure a “high level of robustness, security and accuracy”.

While these steps have been applauded by those with a keen eye on privacy, they are unlikely to be welcomed so warmly by those who have to put the measures in place. Ilia Kolochenko, CEO of ImmuniWeb, a global application security company that develops AI and ML technologies for SaaS-based application security solutions, believes the stringent requirements will be “arduous to implement” in practice.

“For instance, assessment of high-risk AI systems will be a laborious and costly task that may also jeopardise many trade secrets of European companies,” he tells IT Pro. “Moreover, most of the AI systems are non-static and are continuously improved, thus new regulation will unlikely provide even a 90% guarantee that the system will remain adequate after the audit.

“Furthermore, the requisite explainability and traceability of AI output is oftentimes technically impossible. Finally, isolated AI regulation leaves the door widely open for traditional software offering the same capacities in high-risk areas of operations. In a nutshell, this timely idea certainly deserves further discussion and elaboration, however, practicality will be the key to its eventual success or failure.”

