It’s critical for healthcare sector entities that are considering deploying generative AI applications, or are already doing so, to create an extensive threat modeling framework, said Mervyn Chapman, principal consultant at consulting and managed services firm Ahead and a former healthcare CISO.
“Before you deploy AI, understand what some of the potential attack vectors are and what controls need to be built to assess the vulnerabilities in that system. Build security in from the ground up,” Chapman said in an interview with Information Security Media Group.
“Make sure these controls are documented and they’re part of your standard risk assessment protocol,” he said.
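One way to make that concrete is to record each identified attack vector and its control in a structured form that can sit alongside a standard risk assessment. The sketch below is a minimal, hypothetical illustration in Python; the field names, example values and the `needs_review` helper are assumptions for illustration, not a schema Chapman or any specific framework prescribes.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, minimal way to document a generative AI threat-model entry
# so it can be tracked as part of a standard risk assessment protocol.

@dataclass
class AIThreatEntry:
    asset: str                 # the AI system or component under review
    attack_vector: str         # e.g., prompt injection, training-data poisoning
    control: str               # the control built to address the vector
    control_documented: bool   # is the control written into policy?
    last_assessed: date        # when it was last reviewed in the risk process

def needs_review(entry: AIThreatEntry, max_age_days: int = 365) -> bool:
    """Flag entries whose controls are undocumented or overdue for reassessment."""
    age_days = (date.today() - entry.last_assessed).days
    return not entry.control_documented or age_days > max_age_days

if __name__ == "__main__":
    entry = AIThreatEntry(
        asset="clinical-summarization chatbot",
        attack_vector="prompt injection via patient-supplied text",
        control="input sanitization and human review before chart entry",
        control_documented=True,
        last_assessed=date(2023, 6, 1),
    )
    print("Needs review:", needs_review(entry))
```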
In this audio interview with Information Security Media Group, Chapman also discussed:
- The most common generative AI use cases emerging in healthcare today;
- Essential checks and balances for AI accuracy and integrity;
- The importance of stringent access controls for AI users, especially those who have access to the back end of AI systems.
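On that last point, back-end access is often enforced with role-based permission checks. The following is a minimal, hypothetical sketch, not Chapman's recommendation or any particular product's API; the role names, permission set and `require_permission` decorator are illustrative assumptions.

```python
from functools import wraps

# Hypothetical role-to-permission mapping: only a narrow set of roles may
# touch back-end AI operations such as updating the model or viewing prompts.
ROLE_PERMISSIONS = {
    "clinician": {"query_model"},
    "ml_engineer": {"query_model", "view_prompts", "update_model"},
    "admin": {"query_model", "view_prompts", "update_model", "manage_access"},
}

def require_permission(permission: str):
    """Reject calls from users whose role does not grant the permission."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"{user_role!r} may not {permission}")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("update_model")
def update_model(user_role: str, model_id: str) -> str:
    # Back-end operation: only roles granted 'update_model' reach this point.
    return f"model {model_id} updated by {user_role}"

if __name__ == "__main__":
    print(update_model("ml_engineer", "triage-v2"))    # allowed
    try:
        update_model("clinician", "triage-v2")          # blocked
    except PermissionError as err:
        print("Denied:", err)
```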
Chapman has more than two decades of experience, including serving as a CISO. He specializes in the NIST, CIS and HIPAA compliance frameworks and has expertise in program development, incident response, vulnerability management, policy formulation and regulatory compliance.