Policy and Regulatory Frameworks
The EU AI Act
- First comprehensive AI regulation in the world
- Adopted by European Parliament in March 2024
- Risk-based approach to AI regulation
- Focuses on transparency, safety, and fundamental rights
- “Brussels Effect”: Influence on other jurisdictions, including Colorado
EU AI Act: Risk Categories
The Act classifies AI systems based on risk level:
- Unacceptable Risk: Banned outright (e.g., social scoring, manipulative AI)
- High Risk: Strict requirements (e.g., critical infrastructure, education, hiring)
- Limited Risk: Transparency obligations (e.g., chatbots, emotion recognition)
- Minimal Risk: Minimal regulation (most AI applications)
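The four-tier scheme above can be sketched as a simple lookup. This is an illustrative simplification, not a legal decision procedure; the tier names and example use cases come from the slides, while the function and mapping are hypothetical:

```python
# Hypothetical sketch of the EU AI Act's four-tier risk classification.
# Tier names and example use cases are from the Act; this mapping is a
# toy simplification for discussion, not a compliance tool.

RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative AI"},
    "high": {"critical infrastructure", "education", "hiring"},
    "limited": {"chatbot", "emotion recognition"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted is minimal risk."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"  # most AI applications fall here

print(classify("chatbot"))         # limited
print(classify("spam filtering"))  # minimal
```

Note that the default branch mirrors the Act's structure: systems are minimal-risk unless they match a listed category.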
EU AI Act: High-Risk Use Cases
The following AI applications are considered high-risk and subject to strict requirements:
- Biometrics: Remote identification systems, categorization systems, emotion recognition
- Critical Infrastructure: Safety components for traffic, utilities, digital infrastructure
- Education: Systems determining access to education, evaluating outcomes, monitoring tests
- Employment: Recruitment tools, task allocation, performance monitoring, promotion/termination
EU AI Act: High-Risk Use Cases (continued)
- Essential Services: Eligibility assessment for benefits, credit scoring, insurance pricing
- Law Enforcement: Crime risk assessment, evidence reliability evaluation, profiling
- Migration & Border Control: Risk assessments, application examination, identification
- Justice & Democracy: Legal interpretation, dispute resolution, election influence
EU AI Act: What must you do if you’re ‘high-risk’?
- Establish a risk management system throughout the lifecycle
- Implement data governance: test datasets for representativeness and ensure they are, “to the best extent possible”, free of errors
- Create detailed technical documentation for others to determine compliance and risk
- Design systems for automatic record-keeping of risk-relevant events
- Provide clear instructions for use to downstream deployers
- Enable human oversight in system design
- Ensure appropriate accuracy, robustness, and cybersecurity
- Establish a quality management system for compliance
EU AI Act: Foundation Model Requirements
Special provisions for general-purpose AI models (GPAIs):
- Technical documentation and risk assessments
- Copyright compliance for training data
- Energy efficiency reporting
- Stricter rules for “systemic risk” models (“when the cumulative amount of compute used for its training is greater than $10^{25}$ floating point operations”)
“Free and open licence GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk.”
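The systemic-risk threshold is a bare compute cutoff, which makes it easy to state as a check. A minimal sketch, assuming the “greater than $10^{25}$ FLOPs” presumption from the Act; the function name and example figures are hypothetical:

```python
# Sketch of the Act's systemic-risk presumption for GPAI models:
# a model is presumed to pose systemic risk when cumulative training
# compute exceeds 1e25 floating point operations. Illustrative only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model crosses the Act's compute-based presumption."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

print(presumed_systemic_risk(2e25))  # True  — above the threshold
print(presumed_systemic_risk(3e23))  # False — below the threshold
```

The threshold is a strict inequality, so a model at exactly $10^{25}$ FLOPs would not trigger the presumption under this reading.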
Discussion
- What are the potential positive impacts of the legislation as implemented in the EU AI Act?
- What are the potential concerns about the EU AI Act?
- Who benefits? Who is harmed?