The EU AI Act: What AI Developers Need to Build, Document and Prove

The EU AI Act introduces concrete obligations for organizations that develop, integrate, place on the market, or put AI systems into service in the EU. For technical teams, this means that compliance is not just a legal or policy matter. It affects system design, data practices, testing, documentation, release decisions, logging, monitoring, human oversight, and the evidence needed to show that the system meets the applicable requirements.

Many developers know the headlines of the AI Act, but not yet what it means in day-to-day engineering work. Which systems are prohibited? When is a system high-risk? What technical documentation is actually required? What must be logged, tested, reviewed, and monitored? What has to be built into the system, and what has to be shown to regulators, customers, or internal compliance teams? The Act’s core high-risk requirements cover risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity, and quality management. Providers have the main compliance burden, while deployers and importers/distributors have their own duties, and general-purpose AI model providers now also face separate obligations. 

In this one-day workshop, AIQ translates the AI Act into concrete engineering tasks and lifecycle artifacts. Participants learn how to move from legal text to implementation work: classify a use case, identify the applicable role, determine whether the system is prohibited, high-risk, or subject mainly to transparency duties, and map the legal requirements into technical controls, documentation, validation evidence, and operational processes. The focus is practical: what developers, ML engineers, technical leads, QA specialists, and product owners need to produce so that an AI system is not only functional, but also defensible under the Act. The high-risk chapter of the Act sets out the core provider requirements in Articles 8–15, while provider obligations are centered in Article 16 and related provisions. 

This training is designed for software developers, ML/AI engineers, data scientists, QA and test engineers, architects, technical product managers, and compliance-facing technical leads who work on AI-enabled products or internal systems. It is particularly useful for teams that need to understand where compliance affects model development, system architecture, evaluation, release management, post-market monitoring, and customer-facing documentation. It is also relevant for teams building on top of third-party models, since role allocation and downstream obligations matter greatly under the Act, especially for providers of GPAI models and downstream AI-system providers. 

 


Participants will learn to: 

  • Determine whether a use case is prohibited, high-risk, limited-risk, or outside the core AI Act obligations

  • Identify whether their organization acts as provider, deployer, importer, distributor, or downstream integrator

  • Translate Articles 8–15 into concrete engineering tasks and deliverables

  • Build the documentation and evidence expected for high-risk AI systems

  • Understand what needs to be tested, logged, monitored, and reviewed before and after release

  • Distinguish system-level AI Act obligations from GPAI model obligations

  • Structure a practical compliance workflow that technical teams can actually execute


Agenda

10:00  

Welcome & Introduction 

  • Why the AI Act matters for engineering teams
  • The compliance lifecycle: from use case to release and post-market monitoring
  • How legal obligations become technical work items

10:20 

Scope, roles and classification

  • What counts as an AI system under the Act

  • Roles in the value chain: provider, deployer, importer, distributor, authorized representative

  • Prohibited practices vs high-risk systems vs transparency obligations

  • How Annex III high-risk categories affect product teams

11:00 

High-risk AI requirements: what developers need to build

  • Risk management system

  • Data and data governance expectations

  • Technical documentation and record-keeping

  • Logging capabilities and traceability

  • Transparency and instructions for use

  • Human oversight mechanisms

  • Accuracy, robustness, and cybersecurity 

11:45 

From legal text to engineering artifacts

  • Turning requirements into backlog items and release gates

  • What belongs in the technical file

  • Mapping risk controls to evidence

  • System boundaries, intended purpose, assumptions, and foreseeable misuse

  • How to prepare a defensible “compliance by design” package

12:30 

Lunch break 

13:00 

Provider obligations and lifecycle accountability

  • What the provider is responsible for before placing a system on the market or putting it into service

  • Quality management system expectations

  • Conformity assessment logic

  • Corrective actions, incident handling, and information duties

  • Post-market monitoring as an engineering responsibility, not just a legal one

14:00 

Working with third-party models and GPAI

  • When you are “just using an LLM” and when you are still a provider of an AI system

  • What downstream developers need from GPAI providers

  • GPAI model obligations: technical documentation, copyright policy, and training-data summary

  • Additional obligations for GPAI models with systemic risk

15:00 

Coffee break  

15:15 

Testing, evidence and release readiness  

  • What is not enough: “the model seems to work”

  • Validation and test evidence for compliance-relevant claims

  • Robustness, edge cases, misuse, and failure mode testing

  • Human oversight checks and operational controls

  • Documentation of residual risk and known limitations

16:15 

Post-market obligations and operational reality 

  • Monitoring after deployment

  • Logging, incident escalation, and change management

  • When updates may require reassessment

  • Customer feedback, complaints, and field performance as compliance inputs

  • Building a workable operating model between engineering, product, legal, and compliance

16:45 

Practical exercise: classify, map, implement 

  • Classify a realistic AI use case

  • Identify the role of each actor in the chain

  • Derive the applicable obligations

  • Define the engineering artifacts, controls, and evidence needed for release

17:00 

End of the workshop 


The presentation language is English.

Trainer 

Dr. Simone Amoroso is Head of Technology at AIQ, where he works on engineering approaches for trustworthy, testable, and secure AI systems. He holds a PhD in experimental high-energy physics, and his extensive research experience brings a strong quantitative mindset to AI development, safety, and evaluation.

Registration

About us

To improve the quality of AI systems and make it transparently verifiable, the State of Hesse and the VDE Verband der Elektrotechnik Elektronik Informationstechnik e. V. founded the AI Quality & Testing Hub GmbH (AIQ). AIQ enables companies and organizations to develop, validate, and continuously enhance quality characteristics of artificial intelligence (AI). This ensures that AI applications are trustworthy – and that innovations reliably reach the market.

Data protection

Information on how AI Quality & Testing Hub GmbH handles your personal data, the purposes for which your data is processed, the legal bases for processing, and your rights can be found in the AIQ privacy policy.

Registration for this event is carried out via the service provider eveeno, acting as a data processor. You can find eveeno’s privacy policy at https://eveeno.com/de/privacy.

Terms of participation

The event is limited to a minimum of 5 and a maximum of 20 participants. If the minimum number of participants is not reached one week before the event begins, we reserve the right to cancel it. For organizational reasons, registration for our events typically closes 7 days before the event date. Thank you for your understanding.

Please note the General Terms and Conditions for events and newsletters of AI Quality & Testing Hub GmbH (AIQ).

Payment processing is handled by eveeno.