Successful AI Implementation Requires a Secure Foundation, Attention to Regulations
The private sector’s frenzy to incorporate generative artificial intelligence into products is leading companies to overlook basic security practices, a Google executive warned Tuesday.
“Everybody is talking about prompt injection or backdooring models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong,” said John Stone – whose title at Google Cloud is “chaos coordinator” – while speaking at Information Security Media Group’s London Cybersecurity Summit.
Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said.
“There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about.”
Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must exercise particular caution when deploying AI solutions trained on public data sets.
“Depending on the industry, there might be huge regulatory concerns regarding applications of AI, especially in financial services, which process sensitive financial data and personal information,” Chakraborty said.
Amid increased regulatory focus, especially the European Union’s proposed AI Act, more organizations will pivot toward applications similar to ChatGPT but trained on private data, he said.
“That is more secure, plus you can train it with your own private data and keep it within your own ecosystem. So these private models are going to be the future.”
For “safety and risk critical” businesses such as aerospace, the use of AI is currently limited to “decision-supporting” and not “decision-making,” said Adam Wedgbury, head of enterprise security architecture at Airbus.
“Internally, it’s very difficult to use AI at the moment for anything to do with engineering. Again, our dilemma is: Should it be security for AI or AI for security?”