Ban on Prohibited AI Applications to Be Implemented First
The European Union will enforce its forthcoming artificial intelligence regulation in phases, with bans on prohibited AI applications expected to kick in within six months of the regulation’s adoption, a European Commission official said on Monday.
Lawmakers and officials reached a political agreement on the landmark AI Act just minutes before midnight in Brussels. The regulation bans the use of AI for social scoring, the scraping of facial images from the internet or CCTV footage, and emotion recognition in the workplace and schools (see: Europe Reaches Deal on AI Act, Marking a Regulatory First).
The regulation, which now awaits formal adoption by the European Parliament and the Council – a step seen as a formality – is set to take full effect in 2026. Once fully enforced, violations of its rules will incur fines of up to 7% of global revenue.
At a press briefing on Monday, a commission official, speaking on condition of anonymity, said the regulation will likely be codified into law in April. Bans on prohibited AI applications, such as social scoring, will be the first provisions to take effect, the official said.
Member states will have 12 months to set up national AI governance structures. Within 24 months, companies – including foundation model developers – will have to submit documentation such as transparency reports detailing the content used to train their AI models and whether that training data complies with European Union copyright law. Developers of high-risk AI systems will need to provide additional documentation on the risks posed by their models and the safety and privacy measures taken to mitigate those risks.
On cybersecurity, the official said the AI regulation will integrate measures detailed in the Cyber Resilience Act, which makes vulnerability disclosure and patching mandatory for all software and hardware products sold in the EU market.
“We have tried our best to make these two regulations interlinked and connected to avoid duplication,” the official said. “So, basically, if under the Cyber Resilience Act a product has been tested and ensured compliance, this assessment will be considered compliant under the AI Act’s cybersecurity measures.”
These measures will be implemented through new “codes of practice” to be released by the commission. “Once the codes of practice are endorsed by the commission, the companies may rely on them for ensuring conformity with the regulation,” the official said.
Concerns about the gap before enforcement have led some member states to warn that high-risk systems could enter the EU market in the interim (see: EU Artificial Intelligence Act Not a Panacea for AI Risk).
The official downplayed those concerns, saying the commission has already begun enrolling companies in its voluntary AI Pact.
Despite supporters’ assurances that the regulation preserves fundamental rights, privacy groups have criticized lawmakers for only partially banning predictive policing and facial recognition.
Responding to the criticism, the official urged naysayers not to be “too critical” of the regulation.
“In some respects, we would like the regulation to be a bit wider in scope because we want to make it future-proof owing to the fast development of the technology,” he said, adding that this will leave “some room” for companies to conduct “practical experiments.”