Are you wondering who will pay for mistakes made by algorithms in your company, and how the new rules will realistically affect your responsibilities? The upcoming AI Act marks a revolution in the legal regulation of AI, introducing precise rules of the game and establishing who bears liability for artificial intelligence. From this article, you will learn how to classify the AI systems you use, what obligations the new law imposes on you, and how to prepare your organization for the coming changes to avoid severe penalties.
1. Introduction
2. Risk categories in the AI Act: What this means for your company
3. Liability for AI: Who pays for algorithm mistakes?
4. How to prepare your company for the AI Act: A practical guide for IT leaders
5. Ethics in AI: More than regulatory compliance
We are entering an era where artificial intelligence is no longer just a technological novelty but is becoming the foundation of business and operational transformation. For CIOs, this is a time of fundamental change, where strategic AI implementations must go hand in hand with growing legal and ethical awareness. The days of perceiving AI as a "black box" are over. The upcoming AI legal regulations, led by the EU's AI Act, introduce a new era of accountability, forcing organizations to deeply understand and manage the risks associated with the algorithms they use.
For IT leaders, this is not just a challenge, but above all, an opportunity to build a lasting competitive advantage based on trust and transparency. Understanding who is liable for artificial intelligence errors and how the new law will affect the development and maintenance of systems is becoming a key competence. This article serves as a guide through the intricacies of the AI Act, focusing on the practical implications for companies, especially IT departments and software houses.
The Regulation on artificial intelligence, commonly known as the AI Act, is the world's first such comprehensive attempt to regulate the dynamically developing field of AI. The European Union's initiative aims to create a uniform legal framework that will ensure safety, respect for fundamental rights, and build trust in technologies based on artificial intelligence. From a CIO's perspective, the AI Act is not another bureaucratic burden, but a strategic roadmap that defines the rules of the game for the coming years.
What is the AI Act and what are its goals?
The main goal of the AI Act is to ensure that artificial intelligence systems introduced and used in the EU market are safe and comply with fundamental rights and EU values. The regulation is intended to harmonize rules across the Union, eliminating legal fragmentation and creating a predictable environment for innovation. The key objectives of the legislators are:
- Protecting users and society: Ensuring that AI technologies do not violate fundamental rights, such as the right to privacy, non-discrimination, or security.
- Building trust: Creating conditions where citizens and companies can trust AI solutions, which is essential for their widespread acceptance and market development.
- Fostering innovation: Establishing clear and proportionate rules is intended to give companies legal certainty, encouraging investment and the development of ethical AI applications.
- Establishing a global standard: The EU aims for the AI Act to become a benchmark for AI legal regulations worldwide, similar to what happened with GDPR.
The regulation has extraterritorial application – it covers not only providers and users based in the EU but also entities from outside the Union if their AI systems are used on its territory.
A risk-based approach: Four levels of AI system classification
The heart of the AI Act is its risk-based approach. Instead of creating uniform, rigid rules for all AI applications, the regulation categorizes systems according to the potential threat they may pose to health, safety, and fundamental rights. This classification divides AI systems into four main levels:
- Unacceptable Risk: Systems that are considered contrary to EU values and will be completely banned.
- High Risk: Systems that can have a significant impact on people's lives and rights. They are subject to the strictest requirements.
- Limited Risk: Systems requiring compliance with specific transparency obligations.
- Minimal or No Risk: The vast majority of AI systems, which are not subject to special legal obligations.
For every CIO, it is crucial to understand which category the AI systems used or developed in their organization fall into, as this determines the scope of legal and organizational obligations.
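Many teams find it useful to encode the four tiers directly in their AI inventory tooling, so every system carries an explicit classification. Below is a minimal Python sketch of that idea; the use-case mapping is purely illustrative and is no substitute for legal review:

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned practices (e.g., social scoring)
    HIGH = "high"                  # Annex III areas or regulated product components
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # no mandatory obligations

# Illustrative mapping from business use cases to tiers; real classification
# requires legal review, not a lookup table.
USE_CASE_TIERS = {
    "candidate_screening": AIActRiskTier.HIGH,   # employment (Annex III)
    "credit_scoring": AIActRiskTier.HIGH,        # access to essential services
    "customer_chatbot": AIActRiskTier.LIMITED,   # must disclose AI interaction
    "spam_filter": AIActRiskTier.MINIMAL,
}

def classify(use_case: str) -> AIActRiskTier:
    """Return the presumed tier, defaulting to HIGH until legal review says otherwise."""
    return USE_CASE_TIERS.get(use_case, AIActRiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: an unreviewed system then triggers the strictest internal process rather than silently escaping scrutiny.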
The risk classification in the AI Act has a direct impact on the obligations that companies face. Understanding which category a given system falls into is the first and most important step in the compliance process. It is this categorization that defines the legal risk in AI projects and shapes the technology management strategy.
Unacceptable risk systems: Red lines for AI
The AI Act introduces a clear list of practices that are considered a threat to fundamental values and human rights. AI systems belonging to this category will be banned in the European Union. These include, among others:
- Social scoring systems run by public authorities that classify citizens based on their social behavior or personality traits.
- Systems that use subliminal techniques to manipulate human behavior in a way that could cause physical or psychological harm.
- Systems that exploit the vulnerabilities of specific groups (e.g., due to age or disability) to influence their behavior.
- "Real-time" remote biometric identification systems in public spaces by law enforcement (with very narrow exceptions).
From a company's perspective, this means it is necessary to verify that none of the developed or purchased solutions fall into these categories. Although most commercial applications will not be banned, being aware of these boundaries is key to building a strategy based on ethics in AI.
High-risk systems: A key area for CIOs
This is the most important and complex category from a business standpoint. High-risk systems are those that can have a significant, negative impact on the safety, health, or fundamental rights of individuals. The AI Act divides them into two groups:
- AI systems that are components of products already subject to EU sectoral regulations (e.g., medical devices, toys, machinery, lifts).
- AI systems listed in Annex III of the regulation, covering key areas of social and economic life.
Examples of high-risk systems from Annex III that are particularly relevant for CIOs include:
- Management of critical infrastructure: e.g., systems controlling road traffic, water or energy supplies.
- Education and vocational training: e.g., algorithms that grade exams or assign candidates to educational institutions.
- Employment and workforce management: systems for candidate selection, performance evaluation, or making decisions about promotion or dismissal.
- Access to essential private and public services: e.g., systems for credit scoring.
- Justice and democratic processes.
Companies implementing or creating such systems will have to meet a number of rigorous requirements before placing them on the market. These include, among others:
- Implementing a risk management system throughout the AI lifecycle.
- Ensuring high-quality data used for training models to minimize the risk of discrimination and errors (a minimal check of this kind is sketched after this list).
- Creating detailed technical documentation to allow for assessment of the system's compliance.
- Ensuring the possibility of human oversight over the system's operation.
- Achieving a high level of accuracy, robustness, and cybersecurity.
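To make the data-quality requirement more tangible, the sketch below flags groups that are severely under-represented in a training set. It assumes a pandas DataFrame with a hypothetical protected-attribute column; a real bias assessment is far broader than a single balance check:

```python
import pandas as pd

def check_representation(df: pd.DataFrame, column: str,
                         min_share: float = 0.10) -> list[str]:
    """Flag groups in `column` whose share of the training data falls below `min_share`."""
    shares = df[column].value_counts(normalize=True)
    return [f"{group!r}: {share:.1%} of rows (below {min_share:.0%})"
            for group, share in shares.items() if share < min_share]

# Hypothetical training set for a candidate-screening model: 1 "f" row vs. 11 "m" rows.
train = pd.DataFrame({"gender": ["f"] + ["m"] * 11})
for warning in check_representation(train, "gender"):
    print("Data quality warning:", warning)
```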
Limited and minimal risk systems: Information duties and good practices
Many popular AI applications, such as chatbots, recommendation systems, or deepfakes, have been classified as limited-risk systems. The main requirement for them is transparency. Users must be clearly informed that they are interacting with an AI system (in the case of chatbots) or that the content has been generated or modified by AI (in the case of deepfakes). The goal is to avoid manipulation and deception.
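For a chatbot, this transparency duty can come down to an unambiguous disclosure at the start of every conversation. A minimal sketch, with illustrative wording and a hypothetical session object:

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Type 'agent' at any time to reach a person."
)

def first_reply(session: dict, bot_answer: str) -> str:
    """Prepend the AI disclosure to the first message of each session."""
    if not session.get("disclosed"):
        session["disclosed"] = True
        return f"{AI_DISCLOSURE}\n\n{bot_answer}"
    return bot_answer

session: dict = {}
print(first_reply(session, "Hello! How can I help you today?"))
```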
The minimal risk category, on the other hand, includes the vast majority of AI applications, such as spam filters or algorithms in video games. For these systems, the AI Act does not impose any mandatory requirements. However, providers are encouraged to voluntarily adopt codes of conduct that promote ethical and trustworthy practices.
One of the most difficult questions posed by the development of artificial intelligence is the issue of liability. When an autonomous car causes an accident, a credit algorithm unjustifiably denies a loan, or a recruitment system rejects the best candidate due to hidden biases – who is liable for the errors of artificial intelligence? The AI Act, in conjunction with the new AI Liability Directive, creates a framework aimed at providing an answer to this question.
The new AI Liability Directive: A game-changer
In parallel with the AI Act, the European Union is working on a directive to make it easier for individuals harmed by AI systems to claim compensation. The key change introduced by this draft is the rebuttable presumption of causality. In practice, this means shifting the burden of proof.
Until now, the injured party had to prove that a specific error in the AI system was the direct cause of their damage, which is extremely difficult in the case of complex algorithms. After the directive comes into force, if the injured party demonstrates that they were exposed to a high-risk AI system and suffered damage typical of that type of risk, the court will be able to presume a causal link. It will then be up to the provider or user of the AI system to prove that their technology was not at fault. This is a fundamental change that significantly increases liability for AI on the part of companies.
Who is liable for artificial intelligence errors? The chain of liability
The AI Act precisely defines the roles and responsibilities throughout the AI product lifecycle, creating a clear chain of liability. There is no single, simple answer here – liability can rest with different entities, depending on their role:
- Provider: This is the entity that develops the AI system and places it on the market. They bear the main burden of ensuring compliance with the AI Act, including conducting a risk assessment, preparing technical documentation, and implementing quality management systems. The provider is responsible for building security and ethics into the product from the start.
Check out our guide and find out what to ask a software house before signing a contract to guarantee that the delivered software meets all EU legal requirements:
Software House – How to choose and what to ask?
- Importer and Distributor: Entities that place AI systems from third-country providers on the EU market. Their duty is to verify that the provider has completed all formalities and that the product has the appropriate markings (e.g., CE).
- User/Deployer: This is the company or institution that uses an AI system in its professional activities (e.g., a bank using a credit scoring algorithm). The user is obliged to use the system in accordance with its intended purpose and instructions, as well as to ensure appropriate human oversight. In the case of high-risk systems, some of the liability for the consequences of their operation may rest with the user.
Legal liability for algorithm decisions in practice
Let's imagine a scenario: a recruitment company uses an AI system to pre-screen candidates. The algorithm, due to errors in the training data, systematically rejects candidates from a specific demographic group. Who bears the legal liability for the algorithm's decisions?
- The AI system provider can be held liable if they failed to ensure adequate quality of the training data, did not conduct tests for bias, or did not provide clear documentation about the system's limitations.
- The recruitment company (user) can be held liable if it ignored the provider's warnings, failed to provide adequate human oversight of the recruitment process (e.g., by randomly verifying the algorithm's decisions), or used the system in a manner inconsistent with its intended purpose.
The new regulations aim to distribute liability in a way that reflects the real influence of each entity on the system's operation and the resulting damage.
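For the recruitment scenario above, one concrete form of human oversight is a periodic disparate-impact check on the algorithm's decisions. The sketch below applies the "four-fifths" rule of thumb, a selection-rate heuristic borrowed from US employment practice rather than an AI Act requirement, to hypothetical screening outcomes:

```python
def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of accepted candidates per group, from (group, accepted) pairs."""
    totals: dict[str, list[int]] = {}
    for group, accepted in outcomes:
        stats = totals.setdefault(group, [0, 0])
        stats[0] += int(accepted)
        stats[1] += 1
    return {g: acc / n for g, (acc, n) in totals.items()}

def disparate_impact_ratio(outcomes: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening log: group A accepted 6/10, group B accepted 2/10.
log = [("A", i < 6) for i in range(10)] + [("B", i < 2) for i in range(10)]
ratio = disparate_impact_ratio(log)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Possible adverse impact: ratio {ratio:.2f} < 0.80 - escalate for human review")
```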
For CIOs and IT leaders, the AI Act is not a distant prospect but a set of concrete tasks to perform today. A proactive approach is the key to avoiding penalties, minimizing risk, and turning compliance into a competitive advantage. Here are the steps to take to effectively prepare your organization for the new regulations.
Step 1: Audit and classification of existing AI systems
The first and fundamental action is to create a comprehensive inventory of all AI systems used in the organization. You cannot manage what you do not know about. The audit should cover both systems developed internally and those supplied by external vendors. For each system, you need to answer the following questions:
- What is the business purpose of this system?
- What data is used for its training and operation?
- Who is the provider (internal team or external company)?
- What decisions are made or supported by this system?
After gathering this information, the key task is to classify each system according to the AI Act's risk-based approach (unacceptable, high, limited, minimal). This exercise will help identify priority areas and estimate the scale of adjustment work required.
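The audit questions map naturally onto a structured inventory record, one per system. A minimal sketch; all field names are illustrative, and the risk tier should only be set after proper review:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the organization's AI system inventory."""
    name: str
    business_purpose: str             # what the system is for
    training_data: str                # data used for training and operation
    provider: str                     # internal team or external vendor
    decisions_supported: str          # decisions made or supported by the system
    risk_tier: str = "unclassified"   # unacceptable / high / limited / minimal
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="cv-screener",
        business_purpose="pre-screening of job applications",
        training_data="historical hiring decisions",
        provider="external vendor",
        decisions_supported="shortlisting candidates for interview",
        risk_tier="high",  # employment use case, Annex III
    ),
]

# Unclassified systems are treated as high priority until reviewed.
priority = [r for r in inventory if r.risk_tier in ("high", "unclassified")]
```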
Step 2: Gap analysis and implementation of requirements for high-risk systems
Once high-risk systems have been identified, a detailed gap analysis should be conducted. This involves comparing the current state of the system and its processes with the stringent requirements of the AI Act. This analysis should answer the questions:
- Do we have a risk management system for this AI system?
- Does our data quality and governance meet the new standards?
- Is the technical documentation complete and up-to-date?
- Are the human oversight mechanisms sufficient and effective?
- Is the level of cybersecurity adequate for the risk?
The results of the gap analysis will form the basis for creating a remedial action plan. This is a process that will require close cooperation between the IT department, the legal department, compliance, and business units.
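The same questions can be tracked as a per-system checklist, so that the remedial action plan falls out of the data automatically. A minimal sketch; the requirement names paraphrase the AI Act's headline obligations rather than quoting them:

```python
HIGH_RISK_REQUIREMENTS = [
    "risk management system in place",
    "data quality and governance meet the new standards",
    "technical documentation complete and up-to-date",
    "human oversight mechanisms sufficient and effective",
    "cybersecurity adequate for the risk",
]

def gap_analysis(status: dict[str, bool]) -> list[str]:
    """Return the requirements not yet met, i.e. the remedial action plan."""
    return [req for req in HIGH_RISK_REQUIREMENTS if not status.get(req, False)]

# Hypothetical self-assessment for one high-risk system.
cv_screener_status = {
    "risk management system in place": True,
    "human oversight mechanisms sufficient and effective": False,
}
for action in gap_analysis(cv_screener_status):
    print("Remediation needed:", action)
```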
Step 3: Documentation, transparency, and human oversight
Documentation is becoming one of the pillars of compliance with the AI Act. For high-risk systems, it will be necessary to maintain detailed technical documentation that describes, among other things, the system's architecture, the data used, testing processes, validation results, and foreseeable limitations. This documentation will be crucial not only for audits by supervisory authorities but also in the context of liability for AI in the event of legal disputes.
Read our article and see how reliable technical documentation reduces costs and risks in IT, facilitating potential audits and making the business resilient to employee turnover:
Technical Documentation: How to Lower IT Costs and Risk
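One low-effort way to keep such documentation from going stale is to generate its skeleton alongside every system and treat unfilled sections as compliance gaps. A minimal sketch; the section list is an illustrative subset, while Annex IV of the AI Act enumerates the legally binding content:

```python
from datetime import date

DOC_SECTIONS = [
    "system architecture",
    "training data and provenance",
    "testing process",
    "validation results",
    "foreseeable limitations",
]

def documentation_skeleton(system_name: str) -> str:
    """Generate a markdown skeleton to be filled in by the responsible team."""
    lines = [f"# Technical documentation: {system_name}",
             f"_Last updated: {date.today().isoformat()}_", ""]
    for section in DOC_SECTIONS:
        lines += [f"## {section.title()}", "TODO", ""]
    return "\n".join(lines)

print(documentation_skeleton("cv-screener"))
```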
Transparency is equally important. For limited-risk systems, mechanisms must be implemented to inform users about their interaction with AI. For high-risk systems, transparency means providing users with clear information about the system's capabilities and limitations, so they can make informed decisions and exercise effective oversight.
What does the AI Act mean for software houses and IT departments?
For development teams, both in software houses and internal IT departments, the AI Act means the need to integrate new requirements into the software development life cycle (SDLC) and Machine Learning Operations (MLOps) processes. In practice, this means:
- Security & Compliance by Design: The principles of the AI Act must be considered from the very beginning of a project, not added on at the end.
Discover proven market practices and learn how to plan data security: strategy and protection procedures to minimize the risk of leaks and vulnerabilities in newly developed algorithms:
Data Security: Effective strategy and data protection procedures
- Rigorous data management: The processes of acquiring, labeling, validating, and monitoring data must be formalized and documented.
- Versioning and traceability: It will be necessary to version not only code but also models and the data they were trained on to ensure full reproducibility and auditability (see the sketch after this list).
- Test automation: Implementing automated tests that check not only for performance but also for robustness, fairness, and model security.
- New roles on the team: There may be a need for AI ethics specialists, algorithm auditors, or experts in AI Act compliance.
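The versioning and traceability point in particular needs very little tooling to get started: content-hash every model and dataset artifact and append the pair to an audit log. A minimal sketch assuming file-based artifacts; mature MLOps platforms provide the same lineage tracking off the shelf:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of an artifact, so any change to data or model is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_training_run(model_path: Path, dataset_path: Path, log_path: Path) -> dict:
    """Append an audit-log entry linking a model to the exact data it was trained on."""
    entry = {
        "model_sha256": sha256_of(model_path),
        "dataset_sha256": sha256_of(dataset_path),
        "code_version": "<git commit hash>",  # placeholder: record your real commit
    }
    with log_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```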
Preparing a company for the AI Act is a process that requires strategic planning and commitment at the highest level of the organization.
Although the AI Act codifies many principles in the form of hard law, truly responsible implementation of artificial intelligence goes beyond mere regulatory compliance. Building an organizational culture based on ethics in AI is an investment in the long-term trust of customers, partners, and employees. Legal regulations set the minimum, while ethics points the way towards excellence.
For CIOs, this means promoting a discussion within the organization about the values that should guide the creation and use of technology. Issues such as algorithmic fairness, avoiding discrimination, model transparency, and responsibility for the social impact of technology should become an integral part of every AI project. Companies that treat ethics in AI as part of their strategy, not just a checklist to be ticked off, will gain not only legal compliance but also a reputation as a leader that can be trusted in the era of digital transformation.
The arrival of the AI Act marks a turning point in the development and implementation of artificial intelligence. It is the end of the "wild west" era and the beginning of an age of structured accountability and transparency. For CIOs and the entire organization, the new AI legal regulations are not a threat to innovation, but a framework that allows for its safe and sustainable development. The key to success is a proactive approach: auditing existing systems, a thorough analysis of requirements for high-risk solutions, and integrating the principles of ethics and compliance throughout the technology lifecycle.
Issues such as liability for AI and a clear understanding of who is liable for the errors of artificial intelligence are becoming a central element of technological risk management. Companies that start implementing the right processes and building a culture based on ethics in AI today will not only ensure their legal compliance but will also build a solid foundation of trust, which will prove to be the most valuable business asset in the coming years.