AI Risk Assessment Checklist (AIRAC)

Ship responsibly, on schedule.

Meet AIRAC! A fast, auditable AI Risk Assessment Checklist aligned to the RAISEF pillars and lifecycle. Spot risks earlier, gather evidence, and make clear go/no-go calls for both GenAI and classic ML.

What You Get

  • Aligned with RAISEF: Every checklist item maps to Ethical Safeguards, Operational Integrity, and Societal Empowerment so governance isn’t an afterthought.
  • Evidence you can trust: Record L1–L4 evidence quality, named R/A sign-offs, and exportable decision logs for auditability.
  • GenAI-ready: Built-in checks for hallucination/factuality, prompt/RAG risks, jailbreak/injection, and provenance/watermarking.

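To make the evidence and sign-off items concrete, here is a minimal sketch of what one decision-log entry might look like. The field names, schema, and export format are illustrative assumptions for this sketch, not AIRAC's actual export schema.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical sketch of an AIRAC-style decision-log entry; the field
# names are illustrative assumptions, not the tool's real schema.
@dataclass
class DecisionLogEntry:
    checklist_item: str   # which checklist item this evidence supports
    evidence_level: str   # "L1" (weakest) through "L4" (strongest)
    responsible: str      # named R sign-off
    accountable: str      # named A sign-off
    notes: str = ""

    def to_json(self) -> str:
        """Export the entry as JSON for an auditable decision log."""
        return json.dumps(asdict(self), indent=2)

entry = DecisionLogEntry(
    checklist_item="Hallucination/factuality testing",
    evidence_level="L3",
    responsible="ml-lead@example.com",
    accountable="product-owner@example.com",
)
print(entry.to_json())
```

A list of such entries, serialized per release, is the kind of exportable, named-owner audit trail the checklist is designed to produce.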
Key Features

  • Quick Start flow: Set context → name owners → attach evidence → decide → publish.
  • Risk scales & gates: Clear thresholds with Accept / Conditional Accept / Reject outcomes.
  • Extensive coverage: Testing, security, privacy, fairness, safety, and monitoring, with lifecycle gates from design to decommissioning.
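The gate logic above can be sketched as a small function. The specific thresholds and the minimum evidence level below are illustrative assumptions, not AIRAC's published scales; they only show how a risk score and evidence quality could map to the three outcomes.

```python
# Hypothetical gate decision: the score thresholds and evidence
# requirement are assumptions for illustration, not AIRAC's scales.
def gate_decision(risk_score: int, evidence_level: int) -> str:
    """Map a 1-25 risk score (likelihood x impact) and an L1-L4
    evidence level (passed as 1-4) to a lifecycle-gate outcome."""
    if risk_score >= 15:                        # high residual risk
        return "Reject"
    if risk_score >= 8 or evidence_level < 3:   # needs mitigation or stronger evidence
        return "Conditional Accept"
    return "Accept"

print(gate_decision(risk_score=6, evidence_level=3))  # prints "Accept"
```

In practice each lifecycle gate (design, build, deploy, decommission) would call a check like this and record the outcome alongside the sign-offs.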