Case Study 1: AI-Driven Healthcare Diagnostics
| Ref. | Resource | Source | URL |
|---|---|---|---|
| 1 | General-use Responsible AI and Risk Management Frameworks | ||
| 1.1 | Government, Regulatory & Standards | ||
| 1.1.1 | AI Risk Management Framework (AI RMF) | National Institute of Standards and Technology (NIST) | https://www.nist.gov/itl/ai-risk-management-framework |
| 1.1.2 | NIST AI RMF Playbook | National Institute of Standards and Technology (NIST) | https://airc.nist.gov/airmf-resources/playbook |
| 1.1.3 | ISO/IEC 23894:2023: Information technology – Artificial intelligence – Guidance on risk management | International Organization for Standardization (ISO) | https://www.iso.org/standard/77304.html |
| 1.1.4 | Model Artificial Intelligence Governance Framework (2nd Edition) | Infocomm Media Development Authority and Personal Data Protection Commission (PDPC) Singapore | https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf |
| 1.1.5 | Guidance on the Ethical Development and Use of Artificial Intelligence | Office of the Privacy Commissioner for Personal Data, Hong Kong | https://www.pcpd.org.hk/english/resources_centre/publications/files/guidance_ethical_e.pdf |
| 1.1.6 | Australia’s AI Ethics Principles | Australian Government – Department of Industry, Science and Resources | https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles |
| 1.1.7 | Data Protection Audit Framework – Artificial Intelligence Toolkit | UK Information Commissioner’s Office | https://ico.org.uk/for-organisations/advice-and-services/audits/data-protection-audit-framework/toolkits/artificial-intelligence/ |
| 1.1.8 | Artificial Intelligence | Federal Office for Information Security | https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Informationen-und-Empfehlungen/Kuenstliche-Intelligenz/kuenstliche-intelligenz_node.html |
| 1.2 | Corporate | ||
| 1.2.1 | Microsoft Responsible AI Standard, v2 – General Requirements | Microsoft (~221,000 employees) | https://msblogs.thesourcemediaassets.com/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf |
| 1.2.2 | Our AI Principles | Google (Alphabet, ~190,234 employees) | https://ai.google/principles |
| 1.2.3 | AI Ethics Maturity Model | Salesforce (70,000+ employees) | https://www.salesforceairesearch.com/static/ethics/EthicalAIMaturityModel.pdf |
| 1.2.4 | The Aletheia Framework 2.0 | Rolls-Royce (50,000+ employees) | https://www.rolls-royce.com/~/media/Files/R/Rolls-Royce/documents/stand-alone-pages/aletheia-framework-booklet-2021.pdf |
| 1.2.5 | From Principles to Practice – An interdisciplinary framework to operationalise AI ethics | AI Ethics Impact Group led by VDE and Bertelsmann Stiftung (85,000 employees) | https://www.ai-ethics-impact.org/resource/blob/1961130/c6db9894ee73aefa489d6249f5ee2b9f/aieig---report---download-hb-data.pdf |
| 1.2.6 | AI Ethics Framework | Digital Catapult | https://www.digicatapult.org.uk/wp-content/uploads/2023/06/DC_AI_Ethics_Framework-2021.pdf |
| 1.2.7a | AI Ethics Principles | Samsung Electronics (260,000+ employees) | https://www.samsung.com/global/sustainability/policy-file/AZEqcluaBekALYM9/Samsung_Electronics_AI_Ethics_EN.pdf |
| 1.2.7b | AI Safety Framework | Samsung Electronics (260,000+ employees) | https://www.samsung.com/global/sustainability/policy-file/AZTUlveqAMoALYMV/Samsung_Electronics_AI_Safety_Framework_en.pdf |
| 1.2.8 | Making AI Inclusive – Four Guiding Principles for Ethical Engagement | Partnership on AI | https://partnershiponai.org/wp-content/uploads/dlm_uploads/2022/07/PAI_whitepaper_making-ai-inclusive.pdf |
| 1.2.9 | Responsible Use of AI Guide | Amazon Web Services (AWS) | https://d1.awsstatic.com/products/generative-ai/responsbile-ai/AWS-Responsible-Use-of-AI-Guide-Final.pdf |
| 1.2.10 | AI Safety and Security | Frontier Model Forum | https://www.frontiermodelforum.org/about-us |
| 2 | Concept-based Frameworks | ||
| 2.1 | Accountability | ||
| 2.1.1 | Raji, I. D., et al. (2020). Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing | | https://dl.acm.org/doi/pdf/10.1145/3351095.3372873 |
| 2.1.2 | Cobbe, J., et al. (2021). Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems | | https://dl.acm.org/doi/pdf/10.1145/3442188.3445921 |
| 2.1.3 | Guidance on the AI Auditing Framework – Draft guidance for consultation | UK Information Commissioner’s Office | https://ico.org.uk/media2/migrated/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf |
| 2.1.4 | Artificial Intelligence – An Accountability Framework for Federal Agencies and Other Entities | US Government Accountability Office (GAO) | https://www.gao.gov/assets/gao-21-519sp.pdf |
| 2.2 | Benchmarking and Performance | ||
| 2.2.1 | ISO/IEC DTS 42119-2: Information technology – Artificial intelligence – Testing of AI – Part 2: Overview of testing AI systems | International Organization for Standardization (ISO) | https://www.iso.org/standard/84127.html |
| 2.2.2 | ISO/IEC NP TS 12831: Information technology – Artificial intelligence – Testing for AI Systems | The British Standards Institution (BSI) | https://standardsdevelopment.bsigroup.com/projects/9021-06406 |
| 2.2.3 | IEEE 2937-2022: IEEE Standard for Performance Benchmarking for Artificial Intelligence Server Systems | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2937/10376 |
| 2.2.4 | IEEE 2801-2022: IEEE Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2801/7459 |
| 2.3 | Agentic Systems | ||
| 2.3.1 | Lanus, E., et al. (2021). Test and Evaluation Framework for Multi-Agent Systems of Autonomous Intelligent Agents | | https://doi.org/10.1109/SOSE52739.2021.9497472 |
| 2.3.2 | Responsible bots: 10 guidelines for developers of conversational AI | Microsoft (~221,000 employees) | https://www.microsoft.com/en-us/research/wp-content/uploads/2018/11/Bot_Guidelines_Nov_2018.pdf |
| 2.3.3 | Shavit, Y., et al. (n.d.). Practices for Governing Agentic AI Systems | | https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf |
| 2.3.4 | Chan, A., et al. (2024). Visibility into AI Agents | | https://arxiv.org/pdf/2401.13138 |
| 2.3.5 | AI Coding Assistants | Federal Office for Information Security | https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/ANSSI_BSI_AI_Coding_Assistants.pdf |
| 2.3.6 | Ognibene, D., et al. (2025). SCOOP: A Framework for Proactive Collaboration and Social Continual Learning through Natural Language Interaction and Causal Reasoning | | https://arxiv.org/pdf/2503.10241 |
| 2.3.7 | Ranjan, R., et al. (2025). LOKA Protocol: A Decentralized Framework for Trustworthy and Ethical AI Agent Ecosystems | | https://arxiv.org/pdf/2504.10915 |
| 2.3.8 | Liu, J., et al. (2025). ACPs: Agent Collaboration Protocols for the Internet of Agents | | https://arxiv.org/abs/2505.13523 |
| 2.4 | Content Provenance | ||
| 2.4.1 | C2PA Specifications | Coalition for Content Provenance and Authenticity (C2PA) | https://c2pa.org/specifications/specifications/2.0/index.html |
| 2.5 | Cybersecurity and Safety | ||
| 2.5.1 | Voluntary AI Safety Standard | Australian Government – Department of Industry, Science and Resources – National Artificial Intelligence Centre | https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf |
| 2.5.2 | Guidelines on Securing AI Systems | Cyber Security Agency of Singapore | https://isomer-user-content.by.gov.sg/36/e05d8194-91c4-4314-87d4-0c0e013598fc/Guidelines%20on%20Securing%20AI%20Systems.pdf |
| 2.5.3 | AI Safety Governance Framework | China’s National Cyber Security Standardization Technical Committee Secretariat | https://www.tc260.org.cn/upload/2024-09-09/1725849192841090989.pdf |
| 2.5.4 | ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) | The MITRE Corporation | https://atlas.mitre.org |
| 2.5.5 | Presidio AI Framework: Towards Safe Generative AI Models | World Economic Forum | https://www3.weforum.org/docs/WEF_Presidio_AI%20Framework_2024.pdf |
| 2.5.6 | Smith, C., et al. (n.d.). Hazard Contribution Modes of Machine Learning Components | | https://ntrs.nasa.gov/api/citations/20200001851/downloads/20200001851.pdf |
| 2.5.7 | IEEE 7009-2024: IEEE Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/7009/7096 |
| 2.5.8 | ETSI GR SAI 006 V1.1.1 (2022-03): Securing Artificial Intelligence (SAI); The role of hardware in security of AI | Secure AI (SAI) ETSI Industry Specification Group (ISG) | https://cdn.standards.iteh.ai/samples/60132/b0afcc3e17f54ee4b7e724e5670b26dc/ETSI-GR-SAI-006-V1-1-1-2022-03-.pdf |
| 2.5.9 | IEEE P3157: Recommended Practice for Vulnerability Test for Machine Learning Models for Computer Vision Applications | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/3157/10876 |
| 2.5.10 | ISO/IEC TS 27022:2021: Information technology – Guidance on information security management system processes | International Organization for Standardization (ISO) | https://www.iso.org/standard/61004.html |
| 2.5.11 | NIST AI 100-2e2023: NIST Trustworthy and Responsible AI: Vassilev, A., et al. (2023). Adversarial Machine Learning – A Taxonomy and Terminology of Attacks and Mitigations | National Institute of Standards and Technology (NIST) | https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf |
| 2.5.12 | 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps | Open Web Application Security Project (OWASP) | https://genai.owasp.org/llm-top-10 |
| 2.5.13 | AI Cyber Security Code of Practice | UK Government – Department for Science, Innovation & Technology | https://www.gov.uk/government/calls-for-evidence/cyber-security-of-ai-a-call-for-views/a-call-for-views-on-the-cyber-security-of-ai#ai-cyber-security-code-of-practice |
| 2.5.14 | Guidelines and Companion Guide on Securing AI Systems | Cyber Security Agency (CSA) of Singapore | https://www.csa.gov.sg/resources/publications/guidelines-and-companion-guide-on-securing-ai-systems |
| 2.5.15 | Databricks AI Security Framework (DASF) 2.0 – An actionable framework for managing AI security | Databricks | https://www.databricks.com/resources/whitepaper/databricks-ai-security-framework-dasf |
| 2.6 | Red Teaming | ||
| 2.6.1 | Guide to Red Teaming Methodology on AI Safety | Japan AI Safety Institute (AISI) | https://aisi.go.jp/assets/pdf/ai_safety_RT_v1.00_en.pdf |
| 2.6.2 | Red Teaming for GenAI Harms – Revealing the Risks and Rewards for Online Safety | Ofcom | https://www.ofcom.org.uk/siteassets/resources/documents/consultations/discussion-papers/red-teaming/red-teaming-for-gen-ai-harms.pdf |
| 2.6.3 | Walter, M. J., et al. (2024). A Red Teaming Framework for Securing AI in Maritime Autonomous Systems | | https://www.tandfonline.com/doi/epdf/10.1080/08839514.2024.2395750 |
| 2.6.4 | Artificial Intelligence Safety Commitments | China Academy of Information and Communications Technology (CAICT) | https://mp.weixin.qq.com/s/s-XFKQCWhu0uye4opgb3Ng |
| 2.7 | Retrieval Systems | ||
| 2.7.1 | Ammann, L., et al. (2025). Securing RAG: A Risk Assessment and Mitigation Framework | | https://arxiv.org/pdf/2505.08728 |
| 2.8 | Data | ||
| 2.8.1 | ISO/IEC 5259-4:2024 – Artificial intelligence – Data quality for analytics and machine learning (ML) – Part 4: Data quality process framework | International Organization for Standardization (ISO) | https://www.iso.org/standard/81093.html |
| 2.8.2 | Afzal, S., et al. (2021). Data Readiness Report | | https://ieeexplore.ieee.org/abstract/document/9592479 |
| 2.8.3 | Hutchinson, B., et al. (2021). Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure | | https://dl.acm.org/doi/pdf/10.1145/3442188.3445918 |
| 2.8.4 | Data Provenance Standard | Data & Trust Alliance | https://dataandtrustalliance.org/work/data-provenance-standards |
| 2.8.5 | Faridoon, A., et al. (2024). Healthcare Data Governance, Privacy, and Security – A Conceptual Framework | | https://arxiv.org/pdf/2403.17648 |
| 2.8.6 | ISO/IEC 25012:2008 – Software engineering – Software product Quality Requirements and Evaluation (SQuaRE) – Data quality model | International Organization for Standardization (ISO) | https://www.iso.org/standard/35736.html |
| 2.8.7 | ISO/IEC 25024:2015 – Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – Measurement of data quality | International Organization for Standardization (ISO) | https://www.iso.org/standard/35749.html |
| 2.8.8 | Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation | Agency for Science, Technology and Research and Personal Data Protection Commission (PDPC) Singapore | https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/other-guides/proposed-guide-on-synthetic-data-generation.pdf |
| 2.8.9 | Baack, S., et al. (2025). Dataset Convening – Towards Best Practices for Open Datasets for LLM Training | | https://arxiv.org/pdf/2501.08365 |
| 2.9 | Annotation/Labelling | ||
| 2.9.1 | Best Practices for Managing Data Annotation Projects | Bloomberg (>21,000 employees) | https://assets.bbhub.io/company/sites/40/2020/09/Annotation-Best-Practices-091020-FINAL.pdf |
| 2.9.2 | Data Enrichment Sourcing Guidelines | Partnership on AI | https://partnershiponai.org/wp-content/uploads/2022/11/data-enrichment-guidelines.pdf |
| 2.9.3 | Measures for Labeling Synthetic Content Generated by Artificial Intelligence | Office of the Central Cyberspace Affairs Commission – Cyberspace Administration of China (CAC) | https://www.cac.gov.cn/2025-03/14/c_1743654684782215.htm |
| 2.9.4 | Cybersecurity technology – Labeling method for content generated by artificial intelligence | China’s National Cyber Security Standardization Technical Committee Secretariat | https://www.tc260.org.cn/upload/2025-03-15/1742009439794081593.pdf |
| 2.10 | Bias, Fairness, and Equity | ||
| 2.10.1 | Bender, E. M., et al. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science | | https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00041/43452/Data-Statements-for-Natural-Language-Processing |
| 2.10.2 | Advancing Data Equity: An Action-Oriented Framework | World Economic Forum | https://www3.weforum.org/docs/WEF_Advancing_Data_Equity_2024.pdf |
| 2.11 | Definitions, Terminology, and Classification | ||
| 2.11.1 | OECD Framework for the Classification of AI Systems | Organisation for Economic Co-operation and Development (OECD) | https://www.oecd.org/content/dam/oecd/en/publications/reports/2022/02/oecd-framework-for-the-classification-of-ai-systems_336a8b57/cb6d9eca-en.pdf |
| 2.11.2 | IEEE P3123 – Standard for Artificial Intelligence and Machine Learning (AI/ML) Terminology and Data Formats | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/3123/10744 |
| 2.11.3 | IEEE 2802-2022 – IEEE Standard for Performance and Safety Evaluation of Artificial Intelligence Based Medical Devices: Terminology | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2802/7460 |
| 2.11.4 | ETSI TS 104 050 V1.1.1 (2025-03) – Securing Artificial Intelligence (SAI); AI Threat Ontology and definitions | European Telecommunications Standards Institute (ETSI) | https://www.etsi.org/deliver/etsi_ts/104000_104099/104050/01.01.01_60/ts_104050v010101p.pdf |
| 2.11.5 | ISO 8000-2:2022 – Data quality – Part 2: Vocabulary | International Organization for Standardization (ISO) | https://www.iso.org/standard/85032.html |
| 2.11.6 | ANSI/CTA-2089.1-2020 – Definitions/Characteristics of Artificial Intelligence in Health Care | American National Standards Institute (ANSI) | https://webstore.ansi.org/standards/ansi/ansicta20892020 |
| 2.12 | Design, UI, and UX | ||
| 2.12.1 | Weisz, J. D., et al. (2024). Design Principles for Generative AI Applications | | https://arxiv.org/pdf/2401.14484 |
| 2.12.2 | Lee, K. (2025). Towards a Working Definition of Designing Generative User Interfaces | | https://arxiv.org/pdf/2505.15049 |
| 2.13 | Embodied AI and Robotics | ||
| 2.13.1 | Rachwal, K., et al. (2025). RAI: Flexible Agent Framework for Embodied AI | | https://arxiv.org/abs/2505.07532 |
| 2.14 | Environmental Impact | ||
| 2.14.1 | NWIP TR – Green and sustainable AI (N 256) | The British Standards Institution (BSI) | https://standardsdevelopment.bsigroup.com/projects/9022-07691 |
| 2.15 | Explainability | ||
| 2.15.1 | AI Explainability in Practice – Facilitator Workbook | The Alan Turing Institute | https://www.turing.ac.uk/sites/default/files/2024-06/aieg-ati-7-explainabilityv1.2.pdf |
| 2.15.2 | Explaining Decisions Made with AI | UK Information Commissioner’s Office | https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence |
| 2.15.3 | Information Technology – Artificial Intelligence – Machine Learning (ML) model transparency | Advanced Technology Academic Research Center (ATARC) | https://atarc.org/project/information-technology-artificial-intelligence-machine-learning-ml-model-transparency |
| 2.15.4 | IEEE P2976 – Standard for XAI – eXplainable Artificial Intelligence – for Achieving Clarity and Interoperability of AI Systems Design | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2976/10522 |
| 2.15.5 | IEEE 2894-2024 – IEEE Guide for an Architectural Framework for Explainable Artificial Intelligence | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2894/11296 |
| 2.15.6 | Chen, Y., et al. (2025). Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models | | https://arxiv.org/pdf/2503.14521 |
| 2.16 | Fairness and Bias | ||
| 2.16.1 | ISO/IEC TR 24027:2021 – Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making | International Organization for Standardization (ISO) | https://www.iso.org/standard/77607.html |
| 2.16.2 | Towards a Standard for Identifying and Managing Bias in Artificial Intelligence | National Institute of Standards and Technology (NIST) | https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf |
| 2.16.3 | Confronting Bias: BSA’s Framework to Build Trust in AI | The Software Alliance (BSA) | https://www.nist.gov/system/files/documents/2021/08/23/ai-rmf-rfi-0045.pdf |
| 2.16.4 | Purpose, Process, and Monitoring: A New Framework for Auditing Algorithmic Bias in Housing and Lending | National Fair Housing Alliance (NFHA) | https://nationalfairhousing.org/wp-content/uploads/2022/02/PPM_Framework_02_17_2022.pdf |
| 2.16.5 | ISO/IEC TS 12791:2024 – Information technology — Artificial intelligence — Treatment of unwanted bias in classification and regression machine learning tasks | International Organization for Standardization (ISO) | https://www.iso.org/standard/84110.html |
| 2.17 | High Impact Risk | ||
| 2.17.1 | Preparedness Framework (Beta) | OpenAI | https://cdn.openai.com/openai-preparedness-framework-beta.pdf |
| 2.17.2 | Frontier AI Framework | Meta | https://ai.meta.com/static-resource/meta-frontier-ai-framework |
| 2.17.3 | Responsible Scaling Policy (Version 2.1) | Anthropic | https://www-cdn.anthropic.com/17310f6d70ae5627f55313ed067afc1a762a4068.pdf |
| 2.18 | Human-Computer Interaction | ||
| 2.18.1 | Collaborations Between People and AI Systems (CPAIS): Human-AI Collaboration Framework and Case Studies | Partnership on AI (PAI) | https://partnershiponai.org/wp-content/uploads/2021/08/CPAIS-Framework-and-Case-Studies-9-23.pdf |
| 2.18.2 | IEEE P7008 – Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems (Active PAR) | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/7008/7095 |
| 2.18.3 | IEEE 7014-2024 – IEEE Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/7014/7648 |
| 2.18.4 | IEEE 3128-2025 – IEEE Recommended Practice for the Evaluation of Artificial Intelligence (AI) Dialogue System Capabilities | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/3128/10746 |
| 2.18.5 | IEEE 7010-2020 – IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/7010/7718 |
| 2.19 | Impact Assessments | ||
| 2.19.1 | The AI Impact Navigator – A guide for leaders to steward AI impact — socially, environmentally, and economically. | Australian Government – Department of Industry, Science and Resources – National Artificial Intelligence Centre | https://www.industry.gov.au/sites/default/files/2024-10/ai-impact-navigator-interactive.pdf |
| 2.19.2 | ISO/IEC 42005:2025 – Information technology — Artificial intelligence (AI) — AI system impact assessment | International Organization for Standardization (ISO) | https://www.iso.org/standard/42005 |
| 2.19.3 | Artificial Intelligence Impact Assessment | ECP – Platform for the Information Society | https://ecp.nl/wp-content/uploads/2019/01/Artificial-Intelligence-Impact-Assessment-English.pdf |
| 2.19.4 | Human Rights AI Impact Assessment | Law Commission of Ontario (LCO) & Ontario Human Rights Commission | https://www3.ohrc.on.ca/sites/default/files/Human%20Rights%20Impact%20Assessment%20for%20AI.pdf |
| 2.20 | Incident Response | ||
| 2.20.1 | Deployment Corrections – An incident response framework for frontier AI models | Institute for AI Policy and Strategy (IAPS) | https://arxiv.org/pdf/2310.00328 |
| 2.21 | Licensing | ||
| 2.21.1 | IEEE 2840-2024 – IEEE Standard for Responsible AI Licensing | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2840/7673 |
| 2.22 | Robustness | ||
| 2.22.1 | IEEE 3129-2023 – IEEE Standard for Robustness Testing and Evaluation of Artificial Intelligence (AI)-based Image Recognition Service | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/3129/10747 |
| 2.22.2 | ISO/IEC TR 24029-1:2021 – Artificial Intelligence (AI) — Assessment of the robustness of neural networks – Part 1: Overview | International Organization for Standardization (ISO) | https://www.iso.org/standard/77609.html |
| 2.23 | System Management | ||
| 2.23.1 | ISO/IEC 42001:2023 – Information technology — Artificial intelligence — Management system | International Organization for Standardization (ISO) | https://www.iso.org/standard/42001 |
| 2.23.2 | Smith, C. J. (2019). Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development | | https://arxiv.org/ftp/arxiv/papers/1910/1910.03515.pdf |
| 2.23.3 | ISO/IEC TS 25058:2024 – Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Guidance for quality evaluation of artificial intelligence (AI) systems | International Organization for Standardization (ISO) | |
| 2.23.4 | ISO/IEC 5338:2023 – Information technology — Artificial intelligence — AI system life cycle processes | International Organization for Standardization (ISO) | https://www.iso.org/standard/81118.html |
| 2.23.5 | ISO/IEC 38507:2022 – Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations | International Organization for Standardization (ISO) | https://www.iso.org/standard/56641.html |
| 2.24 | Transparency | ||
| 2.24.1 | ISO/IEC FDIS 12792 – Information technology — Artificial intelligence — Transparency taxonomy of AI systems | International Organization for Standardization (ISO) | https://www.iso.org/standard/84111.html |
| 2.24.2 | IEEE 7001-2021 – IEEE Standard for Transparency of Autonomous Systems | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/7001/6929 |
| 2.24.3 | Bennet, K., et al. (2024). Implementing AI Bill of Materials (AI BOM) with SPDX 3.0 | | https://arxiv.org/pdf/2504.16743 |
| 2.25 | Trustworthiness | ||
| 2.25.1 | Ethics guidelines for trustworthy AI | European Commission | https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai |
| 2.25.2 | Baker-Brunnbauer, J. (2021). TAII Framework for Trustworthy AI Systems | | https://journal.robonomics.science/index.php/rj/article/view/17/6 |
| 2.25.3 | ANSI/CTA-2090 – The Use of Artificial Intelligence in Health Care: Trustworthiness | American National Standards Institute (ANSI) | https://shop.cta.tech/products/cta-2090 |
| 2.25.4 | ISO/IEC TR 24028:2020 – Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence | International Organization for Standardization (ISO) | https://www.iso.org/standard/77608.html |
| 2.25.5 | ANSI/CTA 2096 – Guidelines for Developing Trustworthy AI Systems | American National Standards Institute (ANSI) | https://shop.cta.tech/products/cta-2096 |
| 2.25.6 | VDE-AR-E 2842-61-6 Anwendungsregel:2021-06 – Development and trustworthiness of autonomous/cognitive systems | VDE VERLAG GmbH | https://www.vde-verlag.de/standards/0800732/vde-ar-e-2842-61-6-anwendungsregel-2021-06.html |
| 2.26 | Validation | ||
| 2.26.1 | BS 30440:2023 – Validation framework for the use of artificial intelligence (AI) within healthcare. Specification | The British Standards Institution (BSI) | https://standardsdevelopment.bsigroup.com/projects/2021-00605#/section |
| 2.26.2 | ISO/IEC DTS 42119-3 – Artificial intelligence — Testing of AI – Part 3: Verification and validation analysis of AI systems (under development) | International Organization for Standardization (ISO) | https://www.iso.org/standard/85072.html |
| 3 | Industry-based Frameworks | ||
| 3.1 | Cognitive Technology | ||
| 3.1.1 | Cognitive Project Management in AI (CPMAI)™ v7 - Training & Certification | Project Management Institute | https://www.pmi.org/shop/p-/digital-product/cognitive-project-management-in-ai-(cpmai)-v7---training-,-a-,-certification/cpmai-b-01 |
| 3.2 | Education and Academia | ||
| 3.2.1 | Guidance for generative AI in education and research | United Nations Educational, Scientific and Cultural Organization (UNESCO) | https://unesdoc.unesco.org/ark:/48223/pf0000386693/PDF/386693eng.pdf.multi.page=1 |
| 3.2.2 | IEEE P2247.4 – IEEE Draft Recommended Practice for Ethically Aligned Design of Artificial Intelligence (AI) in Adaptive Instructional Systems (Active PAR) | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2247.4/10368 |
| 3.2.3 | Mann, S. P., et al. (2024). Guidelines for ethical use and acknowledgement of large language models in academic writing | | https://www.nature.com/articles/s42256-024-00922-7.epdf?sharing_token=HuXgch8N4TMiIj__TuNSv9RgN0jAjWel9jnR3ZoTv0P3hemWIDPPHmWTcywtbB85sAqgSgUPlEd4rvaS_JR2nwptkIduhXOJnYw13H3HPUNTaIR45uYNT79ia82sOi6bSXQu-cl578SzVjPf3cfCt6rXJuSmUnAtRhz0T97rYaE%3D |
| 3.3 | Energy | ||
| 3.3.1 | Generative Artificial Intelligence Reference Guide | US Department of Energy | https://www.energy.gov/sites/default/files/2024-12/Generative%20AI%20Reference%20Guide%20v2%206-14-24.pdf |
| 3.4 | Healthcare and Pharmaceuticals | ||
| 3.4.1 | Artificial Intelligence (AI) Ethics Principles – Principles to guide ethical AI use | Roche | https://assets.roche.com/f/176343/x/401c28049f/roche-ai-ethics-principles.pdf |
| 3.4.2 | Ethics and governance of artificial intelligence for health – Guidance on large multi-modal models | World Health Organization (WHO) | https://iris.who.int/bitstream/handle/10665/375579/9789240084759-eng.pdf |
| 3.4.3 | IEC SRD 63416:2023 ED1 – Ethical considerations of artificial intelligence (AI) when applied in the active assisted living (AAL) context | International Electrotechnical Commission (IEC) | https://www.iec.ch/ords/f?p=103:38:209091949154989::::FSP_ORG_ID,FSP_APEX_PAGE,FSP_PROJECT_ID:11827,23,103371 |
| 3.5 | Intelligence | ||
| 3.5.1 | Artificial Intelligence Ethics Framework for the Intelligence Community | Office of the Director of National Intelligence (US) | https://www.intelligence.gov/ai/ai-ethics-framework |
| 3.5.2 | Principles of Artificial Intelligence Ethics for the Intelligence Community | Office of the Director of National Intelligence (US) | https://www.intelligence.gov/assets/documents/pdf/ai/principles-of-ai-ethics-for-the-ic.pdf |
| 3.6 | Legal Services | ||
| 3.6.1 | Draft UNESCO Guidelines for the Use of AI Systems in Courts and Tribunals | United Nations Educational, Scientific and Cultural Organization (UNESCO) | https://unesdoc.unesco.org/ark:/48223/pf0000390781 |
| 3.7 | Media | ||
| 3.7.1 | PAI’s Responsible Practices for Synthetic Media – A Framework for Collective Action | Partnership on AI (PAI) | https://syntheticmedia.partnershiponai.org |
| 3.7.2a | AI and Responsible Journalism Toolkit: Education | Desirable.AI (Leverhulme Centre for the Future of Intelligence – University of Cambridge, UK, and Center for Science and Thought – University of Bonn, Germany) | https://www.desirableai.com/journalism-toolkit-edu |
| 3.7.2b | AI and Responsible Journalism Toolkit: Ethics/Policy | Desirable.AI (Leverhulme Centre for the Future of Intelligence – University of Cambridge, UK, and Center for Science and Thought – University of Bonn, Germany) | https://www.desirableai.com/journalism-toolkit-ethics |
| 3.7.2c | AI and Responsible Journalism Toolkit: Voices | Desirable.AI (Leverhulme Centre for the Future of Intelligence – University of Cambridge, UK, and Center for Science and Thought – University of Bonn, Germany) | https://www.desirableai.com/journalism-toolkit-voices |
| 3.7.2d | AI and Responsible Journalism Toolkit: Structures | Desirable.AI (Leverhulme Centre for the Future of Intelligence – University of Cambridge, UK, and Center for Science and Thought – University of Bonn, Germany) | https://www.desirableai.com/journalism-toolkit-orgs |
| 3.7.3 | AI Manifesto | Blue Zoo Media Group Ltd (London, UK) | https://www.blue-zoo.co.uk/policies/ai-manifesto |
| 3.8 | Psychology and Mental Health | ||
| 3.8.1 | Steenstra, I., et al. (2025). A Risk Taxonomy for Evaluating AI-Powered Psychotherapy Agents | | https://arxiv.org/pdf/2505.15108 |
| 3.9 | Public sector/Government | ||
| 3.9.1 | Policy for the responsible use of AI in government (Version 1.1) | Australian Government – Digital Transformation Agency | https://www.digital.gov.au/sites/default/files/documents/2024-08/Policy%20for%20the%20responsible%20use%20of%20AI%20in%20government%20v1.1.pdf |
| 3.9.2 | Engin, Z., et al. (2025). The Algorithmic State Architecture (ASA): An Integrated Framework for AI-Enabled Government | | https://arxiv.org/pdf/2503.08725 |
| 4 | Role-based Frameworks | ||
| 4.1 | Investors | ||
| 4.1.1 | RESPONSIBLE AI STARTUPS (RAIS) Framework | Radical Ventures | https://github.com/radicalventures/RAIS-Framework |
| 4.2 | Boards | ||
| 4.2.1 | AI Governance Framework for Boards | Anekanta AI | https://anekanta.co.uk/ai-governance-and-compliance/anekanta-responsible-ai-governance-framework-for-boards |
| 4.3 | Startup Founders | ||
| 4.4 | Responsible Innovation Labs | ||
| 4.4.1a | Responsible AI Framework 101 – Pre-product and/or raised a Pre-seed or Seed | Responsible Innovation Labs (RIL) | https://docs.google.com/presentation/d/e/2PACX-1vSHVxh-HtpzXH1z_PmPJturxV9fXMbhvE6NjJuZLu3FFtmKrC-aDHV66mKF9yLe4eCT7UDmMLuuI7GA/pub?start=false&loop=false&delayms=3000&slide=id.g2ef4f43a125_0_152 |
| 4.4.1b | Responsible AI Framework 201 – Post-product and raised a Seed or Series A | Responsible Innovation Labs (RIL) | https://docs.google.com/presentation/d/e/2PACX-1vR337zZUkx9FwWGepmh3UaoQ5pYNLpKcESVNADR76mADhWg0nw4trS0Wwxi5u1-hG7PJBjHGRN9DEeY/pub?start=false&loop=false&delayms=3000&slide=id.g2ef4f7411db_0_116 |
| 4.4.1c | Responsible AI Framework 301 – Scaling and raised a Series B or more | Responsible Innovation Labs (RIL) | https://docs.google.com/presentation/d/e/2PACX-1vSAOvD1N8Rx6MR89rD2p--UsXjT9XzljhU9nVZQ7QGLIvel8qf93KKe1V__uih70v4HJyYxhjmhLTUW/pub?start=false&loop=false&delayms=3000&slide=id.g2ef4f74123a_0_113 |
| 4.4.2 | Responsible AI Framework v2 (Scaling and raised a Series B or more) | Responsible Innovation Labs (RIL) | https://www.rilabs.org/responsible-ai#RAI-v2 |
| 4.5 | Leadership and Executives | ||
| 4.5.1 | Empowering AI Leadership – An Oversight Toolkit for Boards of Directors | World Economic Forum | https://express.adobe.com/page/RsXNkZANwMLEf |
| 4.5.2 | IEEE P2863 – Recommended Practice for Organizational Governance of Artificial Intelligence (Active PAR) | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/2863/10142 |
| 4.5.3 | XP Z77-101 – Guide of good practices in matters of governance of ethical approaches within organizations | Afnor Editions | https://www.boutique.afnor.org/en-gb/standard/xp-z77101/guide-of-good-practices-in-matters-of-governance-of-ethical-approaches-with/fa200187/263408 |
| 4.6 | Information Technology | ||
| 4.6.1 | ISO/IEC TS 38501:2015 – Information technology — Governance of IT — Implementation guide | International Organization for Standardization (ISO) | https://www.iso.org/standard/45263.html |
| 4.7 | Procurement | ||
| 4.7.1 | IEEE 3119-2025 – IEEE Standard for the Procurement of Artificial Intelligence and Automated Decision Systems | Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) | https://standards.ieee.org/ieee/3119/10729 |
| 4.7.2 | AFNOR SPEC Z77-100-0 – Contractualization of AI systems | ||
| 4.8 | Human Resources | ||
| 4.8.1 | Guidelines for AI and Shared Prosperity – Tools for improving AI’s impact on jobs | Partnership on AI (PAI) | https://partnershiponai.org/wp-content/uploads/dlm_uploads/2023/06/pai_guidelines_shared_prosperity.pdf |
| 4.8.2 | Framework for Promoting Workforce Well-being in the AI-Integrated Workplace | Partnership on AI (PAI) | https://partnershiponai.org/download/4059 |
| 4.9 | Marketing & Advertising | ||
| 4.9.1 | ANA Ethics Code of Marketing Best Practices – Digital Innovation (AI, Machine Learning, and Automated Processing) | Association of National Advertisers (ANA) | https://www.ana.net/content/show/id/accountability-chan-ethicscode-final#bookmark52 |