OpenAI Launches Security Committee Amid Ongoing Criticism
Tuesday, May 28, 2024
AI – particularly in this relatively new era of generative AI – has generated almost as many security and safety concerns as it has excitement about its potential. Those concerns span everything from bias and discrimination in model outputs to hallucinations – made-up answers that are wrong – data security leaks, data sovereignty compliance worries, and the use of the technology by threat groups.
It’s unclear whether the new Safety and Security Committee will ease any of those concerns. Ilia Kolochenko, co-founder and CEO of IT security firm ImmuniWeb, called OpenAI’s move welcome but questioned its societal benefits.
“Making AI models safe, for instance, to prevent their misuse or dangerous hallucinations, is obviously essential,” Kolochenko wrote in an email to Security Boulevard. “However, safety is just one of many facets of risks that GenAI vendors have to address.”
One area that needs even more attention than AI safety, he argued, is the unauthorized collection of data from across the internet for training LLMs and the resulting "unfair monopolization of human-created knowledge."
“Likewise, being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative – the absolutely crucial characteristics of GenAI solutions,” Kolochenko noted. “In view of the past turbulence at OpenAI, I am not sure that the new committee will make a radical improvement.”
OpenAI said the new committee’s first step will be to evaluate and improve OpenAI’s processes and safeguards over 90 days and then bring recommendations back to the full board, with OpenAI publicly sharing the recommendations that are approved.