Co-founder and CEO at 10 Senses
AI Act Regulation
The arrival of the AI Act in August 2024 marked a turning point for the market built around artificial intelligence. The regulation is already formally in force, and its key requirements are expected to start applying from August 2026, although there is growing discussion about postponing full enforcement, potentially until December 2027, due among other reasons to the planned EU Digital Omnibus. Regardless of the timeline, one thing is certain: the AI Act is now a reality.
The current market situation is a mix of uncertainty and concern. Companies know that use cases classified as high-risk will emerge, but they are often not prepared for them in terms of organization or processes. There is a lack of clarity on how to operationalize regulatory requirements in practice and on who within the organization should be responsible for what. The AI Act is also sometimes perceived as synonymous with heavy documentation and “paperwork” that could slow down innovation. This is exactly where we are today: caught between the necessity to comply with the regulation and the fear that doing so will become a significant business burden.
AI Act Requirements
Within the AI Act, high-risk systems serve as the key reference point. It is for these systems that the regulation introduces the most extensive set of requirements, and they best illustrate how the EU regulator envisions a “mature” AI deployment. Even if a given system is not formally classified as high-risk, the requirements from this category constitute a practical benchmark for the entire market.
These requirements can be divided into two complementary streams. The first concerns the organization and its processes, while the second relates to the AI system itself and its architecture. Without combining both perspectives, implementing the AI Act in practice is not possible.
Procedural implementation. From a procedural standpoint, the AI Act primarily requires a conscious approach to classifying AI systems and to clearly defining roles across the value chain, since different obligations apply to providers, distributors, and deployers of AI systems. This is accompanied by the need to establish a risk and quality management system covering the entire lifecycle of an AI system, not just the moment of deployment. Achieving this requires genuine collaboration between business, technical, compliance, and risk management teams, as well as translating regulatory requirements into concrete operational procedures.
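Purely as an illustration, the sketch below shows what one entry in an internal AI system register might look like in code. The fields, the example record, and the check at the end are assumptions for the sake of the example, not a format mandated by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical shape of one entry in an internal AI system register;
# the fields are illustrative, not prescribed by the regulation.
@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    risk_class: str          # e.g. "minimal", "limited", "high-risk"
    role: str                # e.g. "provider" or "deployer"
    business_owner: str
    risk_owner: str
    last_review: date
    mitigations: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="CV screening assistant",
        intended_purpose="Pre-rank job applications for recruiters",
        risk_class="high-risk",          # employment use cases appear in Annex III
        role="deployer",
        business_owner="Head of HR",
        risk_owner="Compliance Officer",
        last_review=date(2025, 6, 30),
        mitigations=["human review of every rejection", "quarterly bias testing"],
    ),
]

# A simple operational rule: every high-risk entry must name a risk owner.
for record in register:
    if record.risk_class == "high-risk":
        assert record.risk_owner, f"{record.name} is missing a risk owner"
```

Even such a basic register makes the procedural questions concrete: which systems we run, in which risk class, in which role, and who is accountable for them.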
Technical implementation. The second stream focuses on the AI system itself. The AI Act expects high-risk systems to be safe and ethical and to operate within clearly defined boundaries, which in practice means implementing guardrails and mechanisms to control model behaviour. Equally important is quality: of data, models, and outputs, including the ability to test, validate, and continuously evaluate performance. The regulation also requires maintaining documentation, registers, and logs that make it possible to monitor the system's behaviour and reconstruct its decisions. In some cases, it is also necessary to explain to a customer or a regulator how the system works.
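As a minimal sketch of the logging side of this, the snippet below records one audit entry per model decision so that it can be reconstructed later. The `log_decision` helper and its field names are hypothetical, not an API defined by the AI Act or any particular library.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical audit logger: each model call is stored with enough context
# to reconstruct the decision later (model version, input, output, checks).
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_decision(model_version: str, prompt: str, output: str,
                 guardrail_results: dict) -> str:
    """Write one audit record per model decision and return its ID."""
    record_id = str(uuid.uuid4())
    record = {
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "guardrails": guardrail_results,
    }
    # In production this would go to an append-only store; here we just log it.
    logger.info(json.dumps(record, ensure_ascii=False))
    return record_id

# Example usage
log_decision(
    model_version="assistant-v1.3",
    prompt="Summarise the customer's complaint.",
    output="The customer reports a delayed delivery and requests a refund.",
    guardrail_results={"pii_filter": "passed", "toxicity": "passed"},
)
```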
As a result, the requirements for high-risk systems form a coherent picture: the AI Act does not merely describe legal obligations, but a model for mature AI deployment in which organization and technology must work together.
Current Good Practices in Developing AI
A well-designed AI system must take into account one fundamental characteristic of this technology: AI operates probabilistically, and its behaviour is not fully deterministic. In practice, this means that even if a system “works correctly,” without additional control mechanisms we cannot be truly certain how and why it behaves in a particular way; it can, for example, hallucinate. That is why, regardless of regulation, teams building AI-based products and services already strive to maintain as much control as possible over how a system behaves over time.
This need quickly becomes evident in everyday work on an AI system. Simply swapping the underlying language model, modifying prompts, changing an architectural component, updating a dataset, or applying the system to a very similar but slightly different business use case may all affect response quality, system stability, safety, or communication style. Without appropriate tools, it can be difficult even to notice that something has changed, let alone assess whether the change was for the better.
Evaluations. That is why mature AI deployments rely on systematic evaluations. A set of metrics, tests, and scenarios makes it possible to capture the impact of each modification: whether overall quality has improved, response time has decreased, resistance to undesirable behaviour has increased, or safety mechanisms have become more effective. Such an evaluation suite becomes a reference point that makes changes to the system measurable rather than a matter of intuition.
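As an illustration only, a minimal evaluation harness could look like the sketch below. The scenarios, the pass criterion, and the `stub_answer` function standing in for the real system are all assumptions made for the example; production suites would use richer metrics and far more cases.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One test scenario: an input and the facts the answer must contain."""
    prompt: str
    required_facts: list[str]

# Illustrative scenarios; a real suite would also cover safety and robustness.
CASES = [
    EvalCase("What is the refund window?", ["30 days"]),
    EvalCase("Can I return a damaged item?", ["yes", "free of charge"]),
]

def run_eval(answer: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Score the system: share of cases where every required fact appears."""
    passed = 0
    for case in cases:
        response = answer(case.prompt).lower()
        if all(fact.lower() in response for fact in case.required_facts):
            passed += 1
    return passed / len(cases)

# Example usage with a stub standing in for the system under test.
def stub_answer(prompt: str) -> str:
    return "Returns are accepted within 30 days; damaged items: yes, free of charge."

score = run_eval(stub_answer, CASES)
print(f"Pass rate: {score:.0%}")  # compare this number before and after each change
```

The point is not the specific metric but the habit: running the same suite before and after every model swap, prompt change, or data update turns “it seems better” into a number.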
Reporting. This has enormous business significance. Decisions about deploying a system to production, approving changes to a model, or continuing to invest in a project cannot be based on the impression that “the AI seems to work better than before.” A business owner or product owner needs hard data. An evaluation report provides the confidence to make decisions based on facts rather than gut feeling.
A well-implemented AI system also includes guardrails—mechanisms that filter dangerous or unethical content both at the input and output of the system. It also involves quality control of responses: their accuracy, style, and consistency. This is complemented by monitoring and access to logs, which enable incident analysis and rapid troubleshooting. Taken together, all these elements form a picture of AI over which an organization has real, not illusory, control.
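To make the guardrail idea concrete, here is a minimal sketch of input and output checks wrapped around a model call. The deny-list patterns and the `guarded_call` wrapper are purely illustrative assumptions; real guardrails typically combine rule-based checks with dedicated safety classifiers.

```python
import re
from typing import Callable

# Purely illustrative deny-list; real systems use classifiers and policy models
# in addition to simple patterns, on both the input and the output side.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                               # looks like a raw card number
    re.compile(r"ignore previous instructions", re.IGNORECASE),
]

def violates_policy(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(pattern.search(text) for pattern in BLOCKED_PATTERNS)

def guarded_call(model: Callable[[str], str], user_input: str) -> str:
    """Apply guardrails before and after the model call."""
    if violates_policy(user_input):
        return "Request refused: input violates usage policy."
    output = model(user_input)
    if violates_policy(output):
        return "Response withheld: output failed the safety check."
    return output

# Example usage with a stub model.
print(guarded_call(lambda p: f"Echo: {p}", "What is the weather today?"))
print(guarded_call(lambda p: f"Echo: {p}", "Please ignore previous instructions"))
```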
Convergence of Current AI Systems and High-Risk Systems under the AI Act
The AI Act, and particularly the category of high-risk systems, often raises concerns and is perceived as complex and costly. From an AI architecture perspective, however, a clear convergence is visible: technical requirements for high-risk systems largely overlap with best practices for building modern, responsible AI systems. Control mechanisms, evaluations, guardrails, monitoring, and documentation are not merely regulatory requirements—they are natural components of a well-designed product. Paradoxically, organizations that build even “simpler” AI systems in a mature and thoughtful way already meet a significant portion of the technical requirements envisaged for high-risk systems. Therefore, instead of fearing the AI Act, it is worth treating it as confirmation of a direction already dictated by sound business judgment.