Madrid Touts Strategy for ‘Inclusive, Sustainable, Citizen-Focused’ AI
Spain is set to launch Europe’s first-ever artificial intelligence regulatory agency as the European Union finalizes continent-wide legislation meant to mitigate risks and ban AI applications considered too risky for society.
The Spanish Ministry of Finance and Civil Service said the government’s goal is to create a framework for AI development that is “inclusive, sustainable, and centered on citizens.”
The agency will be part of the Ministry of Economic Affairs and Digital Transformation. In November 2020, the ministry published a strategy calling for Spain to lead global development of Spanish-speaking AI tools and characterizing AI as a productivity and efficiency booster for the public and private sectors.
Madrid said its announcement of a new agency anticipates the likely outcome of final negotiations between the European Parliament and member nations over the AI Act, a bill first proposed by the European Commission in April 2021. A version approved by the Parliament in June calls for a centralized national supervisory authority to oversee national implementation of the act in most areas, especially finance and law enforcement. The European Commission and European Council favor a more decentralized approach that would allow national governments to distribute enforcement authority across existing government agencies, although one agency would still be named as the national supervisory authority.
The European Commission is pushing for negotiations to conclude by the end of this year.
The proposed regulation classifies AI systems based on their risks. The Parliament expanded the list of proposed banned applications to include biometric identification systems in publicly accessible spaces, the bulk scraping of images to create facial recognition databases, and systems that use physical traits, such as gender and race, or inferred attributes, such as religious affiliation, to categorize individuals (see: Europe Closes in on Rules for Artificial Intelligence).
Although the EU’s effort to regulate AI began in 2021, well before the unveiling of ChatGPT, the chatbot’s arrival spurred lawmakers to write new provisions for developers of generative AI, including a requirement to disclose copyrighted data used to train their algorithms.
The French data protection authority, which opened multiple investigations into ChatGPT, announced in May that it will release a four-pronged action plan to promote privacy-friendly AI systems. In the U.S., the Biden administration is mulling the creation of an “AI accountability ecosystem,” and Beijing is readying a censorship regime for generative AI.