| |
| What makes AI risk different Common failure modes: unsafe content, hallucinations, privacy leaks, security abuse, overreliance
|
| From use case to risk model |
| Lightweight risk assesment & prioritization |
| |
| Control design: How to mitigate AI risks Safety/UX patterns: friction, confirmations, refusals, escalation to humans, safe defaults
Security & data governance controls: access boundaries, prompt injection defenses, PII handling, logging constraints
|
| Safety testing and evidence plan Safety evaluation techniques: red-teaming scripts, adversarial prompt suites, prompt-injection tests, regression sets
|
| |
| Deployment risk plan: monitoring, escalation, incident response What to monitor: safety signals, abuse signals, drift indicators, UX complaints, cost/latency anomalies
|
| |
| Communicating to Stakeholders: AI system cards |
| |