AI Risk Lifecycle in Practice: End-to-End Risk Management for AI Systems 

AI systems can fail in ways that classical software risk methods do not fully capture. Models learn from data, behave probabilistically, and interact with humans and organizations in complex, real-world contexts. Risks can emerge from data, model behavior, user interaction, security exposure, operational drift, or unclear ownership, not just from “bad accuracy.” 

In this one-day online workshop, AIQ shows how to identify, structure, and manage AI risks across the full lifecycle: from idea and data sourcing to deployment, monitoring, and incident response. Through guided hands-on exercises, participants build practical artifacts (risk file, control plan, governance and ops hooks) tailored to real-world projects, and learn to decide which risks require controls, why, who owns them, what evidence is required, and what to do when failures occur. 

This training is designed for professionals involved in building, integrating, or overseeing AI systems. It serves software developers, ML/AI engineers, and data scientists who work directly with AI technologies, as well as QA and test engineers and technical leads who need not only metrics but also clear risk structures and ownership models. It also supports product managers, security specialists, and experts in data governance and compliance: those responsible for translating identified risks into informed organizational decisions. 


Participants will learn to: 

  • Translate a use case into a risk model 

  • Run a lightweight but defensible risk assessment 

  • Build an AI Risk Register with ownership and evidence 

  • Select risk controls beyond testing 

  • Design a deployment risk plan 

  • Communicate risk clearly to stakeholders 

  • Produce a concise AI System Card 


Agenda

10:00  

Welcome & Introduction 

10:15 

What makes AI risk different

  • How AI risk differs from classical software risk (data dependence, probabilistic behavior, misuse) 

  • Common failure modes: unsafe content, hallucinations, privacy leaks, security abuse, overreliance 

  • The “frameworks & standards landscape” and how it maps to our artifacts (taxonomy, traceability, evidence) 

10:50 

From use case to risk model 

  • Intended use / not intended use, stakeholders, and system boundaries 

  • Harm pathways + misuse scenarios (what can go wrong, how it causes harm, who is affected) 

  • Capture assumptions/limits and dependencies (data, tools, sources, human oversight) 

11:45 

Lightweight risk assessment & prioritization 

  • Turn “concerns” into defined risk statements (cause → failure mode → harm) 

  • Impact × likelihood scoring, plus uncertainty markers (what we don’t know yet) 

  • Risk appetite thresholds and prioritization (which risks must be controlled before release) 

12:30 

Lunch break 

13:00 

Control design: How to mitigate AI risks 

  • Control types: preventive vs detective vs corrective; technical vs organizational controls 

  • Safety/UX patterns: friction, confirmations, refusals, escalation to humans, safe defaults 

  • Security & data governance controls: access boundaries, prompt injection defenses, PII handling, logging constraints 

14:00 

Safety testing and evidence plan 

  • Define “evidence” per control: what proves it works and what’s acceptable rigor for the context 

  • Safety evaluation techniques: red-teaming scripts, adversarial prompt suites, prompt-injection tests, regression sets 

  • Test harness structure and reporting: scorecards, thresholds, and how to keep it maintainable over time 

15:15 

Coffee break  

15:30 

Deployment risk plan: monitoring, escalation, incident response 

  • What to monitor: safety signals, abuse signals, drift indicators, UX complaints, cost/latency anomalies 

  • Alert thresholds, escalation paths, and operational roles (who wakes up, who decides) 

  • Rollback/kill-switch criteria and re-approval triggers (what changes require re-assessment) 

16:30 

Governance and Ownership 

  • Ownership model (RACI-lite): who owns risks, controls, evidence, and operational response 

  • Release gates: what must be true before launch and what can be monitored post-launch 

  • Traceability: linking risk → control → evidence → monitoring for auditability and clarity 

16:50 

Communicating to Stakeholders: AI system cards 

  • Create the one-page AI System Card: intended use, limits, residual risk, operational commitments 

  • Stakeholder readout: how to communicate risk without jargon or false certainty 

17:00 

End of the workshop 


The presentation language is English.

Trainer 

Dr. Simone Amoroso is Head of Technology at AIQ, where he works on engineering approaches for trustworthy, testable and secure AI systems. With a background in experimental high-energy physics (PhD) and extensive research experience, he brings a strong quantitative mindset to AI development, safety and evaluation. 

Registration


Data protection

Information on how AI Quality & Testing Hub GmbH handles your personal data, the purposes for which your data is processed, the legal bases for processing, and your rights can be found in the AIQ privacy policy.

Registration for this event is carried out via the service provider eveeno, acting as a data processor. You can find eveeno’s privacy policy at https://eveeno.com/de/privacy.

Terms of participation

Payment processing is handled by eveeno.