The EU AI Act introduces concrete obligations for organizations that develop, integrate, or place AI systems on the market or put them into service in the EU. For technical teams, this means that compliance is not just a legal or policy matter. It affects system design, data practices, testing, documentation, release decisions, logging, monitoring, human oversight, and the evidence needed to show that the system meets the applicable requirements.
Many developers know the headlines of the AI Act, but not yet what it means for day-to-day engineering work. Which systems are prohibited? When is a system high-risk? What technical documentation is actually required? What must be logged, tested, reviewed, and monitored? What has to be built into the system, and what has to be shown to regulators, customers, or internal compliance teams? The Act’s core high-risk requirements cover risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity, and quality management. Providers carry the main compliance burden, while deployers and importers/distributors have their own duties, and providers of general-purpose AI models now also face separate obligations.
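As a rough illustration of how those requirement areas translate into lifecycle artifacts, the sketch below models them as a simple checklist. The article numbers reflect the Act's high-risk chapter (Chapter III, Section 2), but the structure and the artifact names (risk register, Annex IV technical file, and so on) are assumptions about typical engineering evidence, not an official or exhaustive mapping.

```python
# Hypothetical checklist: AI Act high-risk requirement areas (Arts. 9-15)
# mapped to example engineering artifacts. Illustrative only; the artifact
# names are assumptions, not prescribed deliverables.
HIGH_RISK_REQUIREMENTS = {
    "Art. 9  Risk management system":   ["risk register", "hazard analysis", "mitigation log"],
    "Art. 10 Data and data governance": ["dataset datasheets", "bias analysis", "provenance records"],
    "Art. 11 Technical documentation":  ["Annex IV technical file", "architecture docs"],
    "Art. 12 Record-keeping (logging)": ["event logs", "log retention policy"],
    "Art. 13 Transparency to deployers": ["instructions for use", "known limitations"],
    "Art. 14 Human oversight":          ["override controls", "oversight procedures"],
    "Art. 15 Accuracy, robustness, cybersecurity": ["evaluation reports", "adversarial test results"],
}

def open_items(evidence: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per requirement area, the example artifacts not yet present
    in the team's collected `evidence`."""
    return {
        area: [a for a in artifacts if a not in evidence.get(area, [])]
        for area, artifacts in HIGH_RISK_REQUIREMENTS.items()
    }
```

A team could diff such a checklist against the evidence it actually holds at each release gate to surface documentation and testing gaps early, which is the kind of lifecycle artifact work the workshop focuses on.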
In this one-day workshop, AIQ translates the AI Act into concrete engineering tasks and lifecycle artifacts. Participants learn how to move from legal text to implementation work: classify a use case, identify the applicable role, determine whether the system is prohibited, high-risk, or subject mainly to transparency duties, and map the legal requirements into technical controls, documentation, validation evidence, and operational processes. The focus is practical: what developers, ML engineers, technical leads, QA specialists, and product owners need to produce so that an AI system is not only functional, but also defensible under the Act. The Act’s high-risk chapter sets out the core requirements for high-risk AI systems in Articles 8–15, while the corresponding provider obligations are centered in Article 16 and related provisions.
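To make the "classify a use case" step concrete, here is a minimal triage sketch under stated assumptions. The tier labels follow the Act (Article 5 prohibitions, Article 6/Annex III high-risk classification, Article 50 transparency duties), but the boolean inputs and the triage function itself are hypothetical simplifications: a real determination requires legal analysis, including the Article 6(3) derogations and the role allocation discussed in the workshop.

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk (Art. 6 / Annex III)"
    TRANSPARENCY = "transparency duties (Art. 50)"
    MINIMAL = "no specific AI Act duties identified"

def triage(prohibited_practice: bool,
           annex_iii_use_case: bool,
           safety_component: bool,
           interacts_with_people: bool,
           generates_synthetic_content: bool) -> Tier:
    """First-pass triage only. The inputs are simplified stand-ins for
    questions that in practice need legal review, not engineering judgment
    alone (e.g. whether an Annex III use case falls under Art. 6(3))."""
    if prohibited_practice:
        return Tier.PROHIBITED
    # High-risk via Annex III use cases or via being a regulated product's
    # safety component (Art. 6(1)-(2)), before considering derogations.
    if annex_iii_use_case or safety_component:
        return Tier.HIGH_RISK
    # Art. 50 covers, among others, systems interacting with people and
    # systems generating synthetic content.
    if interacts_with_people or generates_synthetic_content:
        return Tier.TRANSPARENCY
    return Tier.MINIMAL
```

The point of such a sketch in the workshop context is not automation but shared vocabulary: each branch corresponds to a different set of controls, documentation, and evidence the team must plan for.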
This training is designed for software developers, ML/AI engineers, data scientists, QA and test engineers, architects, technical product managers, and compliance-facing technical leads who work on AI-enabled products or internal systems. It is particularly useful for teams that need to understand where compliance affects model development, system architecture, evaluation, release management, post-market monitoring, and customer-facing documentation. It is also relevant for teams building on top of third-party models, since role allocation and downstream obligations matter greatly under the Act, especially for providers of GPAI models and downstream AI-system providers.