Inter-Pillar Relationships — Pillar: Operational Integrity

| Relationship | Type | Rationale | Example |
|---|---|---|---|
| Governance vs. Security | Reinforcing | Governance strengthens security by setting protocols and standards that protect against AI threats (Bullock et al., 2024) | Governance mandates security audits in AI deployments to ensure adherence to best practices and protocols (Habbal et al., 2024) |
| Robustness vs. Security | Reinforcing | Robustness strengthens security against adversarial attacks, enhancing overall system reliability (Habbal et al., 2024) | Robust AI enhances security by withstanding data poisoning, which is crucial in cybersecurity (Habbal et al., 2024) |
| Interpretability vs. Security | Tensioned | Security demands limited openness, while interpretability requires transparency, creating an inherent conflict (Bommasani et al., 2021) | Interpretable models in healthcare might expose vulnerabilities if too transparent, affecting security (Rudin, 2019) |
| Explainability vs. Security | Tensioned | Explainability can expose security vulnerabilities by revealing how an AI system operates (Hamon et al., 2020) | Detailed explanations of security systems can help adversaries identify exploitable weaknesses (Hamon et al., 2020) |
| Safety vs. Security | Tensioned | Adversarial-robustness efforts enhance security but may reduce safety by increasing system complexity (Braiek & Khomh, 2024) | Autonomous-vehicle safety protocols might focus on preventing adversarial attacks at the expense of real-world robustness (Leslie, 2019) |
Cross-Pillar Relationships — Pillar: Ethical Safeguards vs. Operational Integrity

| Relationship | Type | Rationale | Example |
|---|---|---|---|
| Fairness vs. Security | Tensioned | Fairness requires data transparency, which often conflicts with strict security protocols that restrict data access (Leslie et al., 2024) | Ensuring fair access to user data can weaken data-security boundaries, posing organizational security risks (Leslie et al., 2024) |
| Inclusiveness vs. Security | Tensioned | Inclusiveness in AI can expose systems to vulnerabilities, challenging security measures (Fosch-Villaronga & Poulsen, 2022; Zowghi & Da Rimini, 2024) | Inclusive AI systems might prioritize accessibility but compromise security, for example when accommodating diverse infrastructures (Microsoft, 2022) |
| Bias Mitigation vs. Security | Reinforcing | Bias mitigation enhances security by reducing vulnerabilities that arise from discriminatory models (Habbal et al., 2024) | Including bias audits in AI-driven fraud-detection systems strengthens security protocols (Habbal et al., 2024) |
| Accountability vs. Security | Reinforcing | Accountability enhances security by ensuring responsible data management and risk identification (Voeneky et al., 2022) | Regular audits of AI systems' security protocols ensure accountability and safety in data governance (Voeneky et al., 2022) |
| Privacy vs. Security | Reinforcing | Privacy and security both strive to safeguard sensitive data, so their objectives align (Hu et al., 2021) | Using encryption, AI systems ensure privacy while maintaining security, protecting data integrity (Hu et al., 2021) |
Pillar: Operational Integrity vs. Societal Empowerment

| Relationship | Type | Rationale | Example |
|---|---|---|---|
| Security vs. Sustainability | Neutral | Security and sustainability address different concerns, with minimal direct overlap in AI system design (van Wynsberghe, 2021) | An AI system could be secure without considering sustainability impacts such as energy use (van Wynsberghe, 2021) |
| Human Oversight vs. Security | Reinforcing | Human oversight enhances security by providing checks against unauthorized access and misuse of AI systems (Lu et al., 2024) | Security protocols are strengthened by human oversight that monitors for potential AI system breaches (Dubber et al., 2020) |
| Security vs. Transparency | Tensioned | Security needs may impede transparency efforts, since disclosure could expose vulnerabilities (Bullock et al., 2024) | When AI transparency compromises security, it can lead to breaches, hindering open communication (Bullock et al., 2024) |
| Security vs. Trustworthiness | Reinforcing | Security underpins trustworthiness by safeguarding AI from breaches, enhancing reliability (Lu et al., 2024) | Secure AI systems, protected against data breaches, inherently build user trust (Lu et al., 2024) |