Insights
Artificial Intelligence in the EU and Malta: Legal Overview and Recent Developments
By Dr Kelly Fenech, Advocate – GKF Legal
Artificial Intelligence (AI) is rapidly transforming various sectors, from finance to healthcare, raising complex legal and ethical questions. In response, the European Union (EU) and Malta have introduced regulatory frameworks to ensure the responsible development and deployment of AI technologies.
1. The European Union’s Artificial Intelligence Act
The EU’s Artificial Intelligence Act (AI Act) is a pioneering regulation aimed at establishing a comprehensive legal framework for AI. The Act classifies AI systems into four risk categories:
- Unacceptable risk: AI systems that pose a clear threat to people’s safety, livelihoods, or rights, such as social scoring by governments, are prohibited.
- High risk: AI systems that significantly affect people’s rights, including those used in biometric identification and the management of critical infrastructure, are subject to strict obligations.
- Limited risk: AI systems subject to specific transparency obligations; for example, chatbots must inform users that they are interacting with AI.
- Minimal risk: AI systems posing minimal or no risk, whose providers may voluntarily adopt codes of conduct.
The AI Act imposes stringent requirements on high-risk AI systems, including:
- Establishing risk management systems.
- Ensuring data quality and documentation.
- Implementing human oversight.
- Maintaining transparency and accountability.
Non-compliance with these obligations can result in significant penalties, including fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
2. Malta’s Approach to AI Regulation
As an EU Member State, Malta is bound by the AI Act. The Malta Digital Innovation Authority (MDIA) plays a central role in the implementation and enforcement of the AI Act within Malta. The MDIA collaborates with sectoral regulators, such as the Malta Financial Services Authority (MFSA), to ensure that AI systems comply with both EU and national regulations.
Malta’s regulatory framework emphasizes:
- Operational resilience: Continuous monitoring and validation of AI systems to detect performance issues and unintended consequences.
- Cybersecurity: Aligning risk management with the MFSA’s ICT and cybersecurity guidelines to maintain strong controls over AI systems.
- Ethical considerations: Promoting transparency and accountability in AI deployment to protect fundamental rights.
3. Key Legal Considerations for Businesses
Businesses operating in the EU and Malta must address several legal considerations when deploying AI systems:
- Data protection: Compliance with the General Data Protection Regulation (GDPR) is essential, particularly concerning data minimisation, purpose limitation, and data subject rights.
- Liability: Determining accountability for AI-driven decisions, especially in high-risk sectors like healthcare and finance.
- Intellectual property: Navigating issues related to the ownership of AI-generated outputs and the protection of underlying algorithms.
- Transparency: Ensuring that AI systems are explainable and that users are informed about AI interactions.
4. Future Developments
The regulatory landscape for AI is evolving. The AI Act entered into force in August 2024, and its provisions are being phased in, with most obligations applying from August 2026 and full application expected by August 2027. Businesses should stay informed about:
- Regulatory updates: Monitoring changes in EU and national regulations affecting AI.
- Best practices: Adopting industry standards and guidelines to ensure compliance.
- Technological advancements: Keeping abreast of developments in AI technologies that may impact legal obligations.
Conclusion
AI offers significant opportunities but also presents complex legal challenges. Businesses in the EU and Malta must navigate this evolving landscape by implementing robust compliance frameworks, ensuring transparency, and addressing ethical considerations. Legal advisors play a crucial role in guiding organisations through these complexities, ensuring that AI deployment aligns with regulatory requirements and best practices.
At GKF Legal, we provide expert legal counsel on AI-related matters, helping clients navigate the regulatory landscape and harness the potential of AI responsibly.
Dr Kelly Fenech is a Founding Partner in GKF Legal’s Financial Services Practice, specialising in EU technology law. The views expressed are his own and do not constitute legal advice.