Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development, Regulation
OneTrust’s Ojas Rege Details EU AI Act Requirements, AI Governance Challenges
The first set of rules banning artificial intelligence systems deemed an unacceptable risk under the European Union AI Act went into effect on Sunday. As of this week, companies are barred from deploying AI-driven emotion recognition in workplaces and schools.
The ban is part of the EU’s phased rollout of the AI Act, the first-ever binding regulation on AI development and deployment. Noncompliance can trigger fines of up to 35 million euros or 7% of a company’s global annual turnover, whichever is higher.
“The phased approach will give some time for companies to prepare, which is always good,” said Ojas Rege, senior vice president and general manager of privacy and data governance at OneTrust. He predicted the regulation will create a “domino effect” similar to the rollout of the General Data Protection Regulation.
“The EU’s risk-based approach is already proving influential on new legislation. What this means for companies is that they have to be much more specific about what the intended outcomes of their AI systems will be,” Rege said.
In this video interview with the Information Security Media Group, Rege also discussed:
- What the phased rollout of the EU rule means;
- Challenges companies are facing in complying with the EU AI Act;
- AI governance under U.S. President Donald Trump, and how U.S.-based companies are preparing for the EU AI Act;
- Areas of AI risk that the EU AI Act will regulate effectively.
Prior to joining OneTrust, Rege was vice president of strategy at MobileIron and previously oversaw mobile product teams at Yahoo! and AvantGo. He holds six mobility patents, including for the enterprise app store and BYOD privacy. Rege is also a Fellow of the Ponemon Institute for information security policy.