References
Accenture. (2024). From compliance to confidence: Embracing a new mindset to advance responsible AI maturity. https://www.accenture.com/us-en/insights/data-ai/compliance-confidence-responsible-ai-maturity
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety [Preprint]. arXiv. http://arxiv.org/abs/1606.06565
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2019). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI [Preprint]. arXiv. https://arxiv.org/abs/1910.10045
Bateni, A., Chan, M. C., & Eitel-Porter, R. (2022). AI fairness: From principles to practice [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2207.09833
Batool, A., Zowghi, D., & Bano, M. (2023). Responsible AI governance: A systematic literature review [Preprint]. arXiv. http://arxiv.org/abs/2401.10896
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 149–159). PMLR. http://proceedings.mlr.press/v81/binns18a/binns18a.pdf
Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: Themes, knowledge gaps and future agendas. Internet Research, 33(7), 133–167. https://doi.org/10.1108/INTR-01-2022-0042
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J. Q., Demszky, D., … Liang, P. (2021). On the opportunities and risks of foundation models [Preprint]. arXiv. https://arxiv.org/abs/2108.07258
Braiek, H. Ben, & Khomh, F. (2024). Machine learning robustness: A primer [Preprint]. arXiv. http://arxiv.org/abs/2404.00897
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Ó Éigeartaigh, S., Beard, S., Belfield, H., Farquhar, S., … Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation [Preprint]. arXiv. https://arxiv.org/abs/1802.07228
Buijsman, S. (2024). Transparency for AI systems: A value-based approach. Ethics and Information Technology, 26(2). https://doi.org/10.1007/s10676-024-09770-w
Bullock, J. B., Chen, Y.-C., Himmelreich, J., Hudson, V. M., Korinek, A., Young, M. M., & Zhang, B. (Eds.). (2024). The Oxford handbook of AI governance. Oxford University Press. https://academic.oup.com/edited-volume/41989
Busuioc, M. (2021). Accountable artificial intelligence: Holding algorithms to account. Public Administration Review, 81(5), 825–836. https://doi.org/10.1111/puar.13293
Carayannis, E. G., & Grigoroudis, E. (Eds.). (2023). Handbook of research on artificial intelligence, innovation and entrepreneurship. Edward Elgar Publishing Limited.
Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I., Madry, A., & Kurakin, A. (2019). On evaluating adversarial robustness [Preprint]. arXiv. http://arxiv.org/abs/1902.06705
Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1721–1730). https://doi.org/10.1145/2783258.2788613
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080
Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6. https://doi.org/10.3389/fhumd.2024.1421273
Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A., Mathur, V., McElroy, E., Sánchez, A. N., Raji, D., Rankin, J. L., Richardson, R., Schultz, J., West, S. M., & Whittaker, M. (2019). AI Now 2019 report. AI Now Institute. https://ainowinstitute.org/wp-content/uploads/2023/04/AI_Now_2019_Report.pdf
d’Aliberti, L., Gronberg, E., & Kovba, J. (2024). Privacy-enhancing technologies for artificial intelligence-enabled systems [Preprint]. arXiv. http://arxiv.org/abs/2404.03509
Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017). https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning [Preprint]. arXiv. http://arxiv.org/abs/1702.08608
Dubber, M. D., Pasquale, F., & Das, S. (Eds.). (2020). The Oxford handbook of ethics of AI. Oxford University Press. https://academic.oup.com/edited-volume/34287
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2011). Fairness through awareness [Preprint]. arXiv. http://arxiv.org/abs/1104.3913
European Parliament. (2017). Civil law rules on robotics (Resolution TA-8-2017-0051). http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf
Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2014). Certifying and removing disparate impact [Preprint]. arXiv. http://arxiv.org/abs/1412.3756
Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://www.mdpi.com/2413-4155/6/1/3
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. C., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI (Berkman Klein Center Research Publication No. 2020-1). https://doi.org/10.2139/ssrn.3518482
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083). https://doi.org/10.1098/rsta.2016.0360
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Fosch-Villaronga, E., & Poulsen, A. (2022). Diversity and inclusion in artificial intelligence. In B. Custers & E. Fosch-Villaronga (Eds.), Law and Artificial Intelligence (pp. 109–134). https://doi.org/10.1007/978-94-6265-523-2_6
Gasser, U., & Almeida, V. A. F. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. https://doi.org/10.1109/MIC.2017.4180835
Gillis, R., Laux, J., & Mittelstadt, B. (2024). Trust and trustworthiness in artificial intelligence. In R. Paul, E. Carmel, & J. Cobbe (Eds.), Handbook on artificial intelligence and public policy. Edward Elgar. https://doi.org/10.2139/ssrn.4688574
Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples [Preprint]. arXiv. https://arxiv.org/abs/1412.6572
Google. (2023). Google AI principles. https://ai.google/responsibility/principles/
Government of Canada. (2023). The artificial intelligence and data act (AIDA): Companion document. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
GSMA. (2024). The GSMA responsible AI maturity roadmap. https://www.gsma.com/solutions-and-impact/connectivity-for-good/external-affairs/wp-content/uploads/2024/09/GSMA-ai4i_The-GSMA-Responsible-AI-Maturity-Roadmap_v8.pdf
Guihot, M., Matthew, A. F., & Suzor, N. P. (2017). Nudging robots: Innovative solutions to regulate artificial intelligence [Preprint]. SSRN. https://ssrn.com/abstract=3017004
Habbal, A., Ali, M. K., & Abuzaraida, M. A. (2024). Artificial intelligence trust, risk and security management (AI TRiSM): Frameworks, applications, challenges and future research directions. Expert Systems with Applications, 240, Article 122442. https://doi.org/10.1016/j.eswa.2023.122442
Hamon, R., Junklewitz, H., & Sanchez Martin, J. I. (2020). Robustness and explainability of artificial intelligence: From technical to policy solutions. Publications Office of the European Union. https://publications.jrc.ec.europa.eu/repository/handle/JRC119336
Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, 3323–3331. https://proceedings.neurips.cc/paper_files/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf
Haresamudram, K., Larsson, S., & Heintz, F. (2023). Three levels of AI transparency. Computer, 56(2), 93–100. https://doi.org/10.1109/MC.2022.3213181
Hendrycks, D., & Dietterich, T. G. (2018). Benchmarking neural network robustness to common corruptions and surface variations [Preprint]. arXiv. http://arxiv.org/abs/1807.01697
Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An overview of catastrophic AI risks [Preprint]. arXiv. http://arxiv.org/abs/2306.12001
High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
High-Level Expert Group on Artificial Intelligence. (2020). Assessment list for trustworthy artificial intelligence (ALTAI) for self-assessment. European Commission. https://doi.org/10.2759/791819
Holstein, K., Vaughan, J. W., Daumé, H., III, Dudík, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3290605.3300830
Hu, Y., Kuang, W., Qin, Z., Li, K., Zhang, J., Gao, Y., Li, W., & Li, K. (2021). Artificial intelligence security: Threats and countermeasures. ACM Computing Surveys, 55(1). https://doi.org/10.1145/3487890
IBM. (2024). IBM AI ethics. https://www.ibm.com/impact/ai-ethics
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., D’Oliveira, R. G. L., Eichner, H., El Rouayheb, S., Evans, D., Gardner, J., Garrett, Z., Gascón, A., Ghazi, B., Gibbons, P. B., … Zhao, S. (2021). Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1–2), 1–210. https://doi.org/10.1561/2200000083
Kamiran, F., & Calders, T. (2012). Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1), 1–33. https://doi.org/10.1007/s10115-011-0463-8
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores [Preprint]. arXiv. http://arxiv.org/abs/1609.05807
Kusner, M. J., Loftus, J. R., Russell, C., & Silva, R. (2017). Counterfactual fairness [Preprint]. arXiv. http://arxiv.org/abs/1703.06856
Leslie, D. (2019). Understanding artificial intelligence ethics and safety. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
Leslie, D., Rincón, C., Briggs, M., Perini, A., Jayadeva, S., Borda, A., Bennett, S., Burr, C., Aitken, M., Katell, M., Fischer, C., Wong, J., & Garcia, I. K. (2024). AI fairness in practice. The Alan Turing Institute. https://www.turing.ac.uk/sites/default/files/2023-12/aieg-ati-fairness_1.pdf
Lipton, Z. C. (2016). The mythos of model interpretability [Preprint]. arXiv. https://arxiv.org/abs/1606.03490
Lu, Q., Zhu, L., Whittle, J., & Xu, X. (2024). Responsible AI: Best practices for creating trustworthy AI systems. Pearson Education. https://www.pearson.com/en-us/subject-catalog/p/responsible-ai-best-practices-for-creating-trustworthy-ai-systems/P200000010211/9780138073886
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions [Preprint]. arXiv. https://arxiv.org/abs/1705.07874
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning [Preprint]. arXiv. http://arxiv.org/abs/1908.09635
Microsoft. (2022). Microsoft responsible AI standard, v2. https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
Monday, L. M. (2022). Define, measure, analyze, improve, control (DMAIC) methodology as a roadmap in quality improvement. Global Journal on Quality and Safety in Healthcare, 5(2), 44–46. https://doi.org/10.36401/jqsh-22-x2
Novelli, C., Taddeo, M., & Floridi, L. (2024). Accountability in artificial intelligence: What it is and how it works. AI and Society, 39(4), 1871–1882. https://doi.org/10.1007/s00146-023-01635-y
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://www.science.org/doi/10.1126/science.aax2342
Publications Office of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act): Text with EEA relevance. http://data.europa.eu/eli/reg/2024/1689/oj
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. https://www.hup.harvard.edu/books/9780674970847
Rahwan, I. (2017). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8
Raji, I. D., & Dobbe, R. (2023). Concrete problems in AI safety, revisited [Preprint]. arXiv. https://arxiv.org/abs/2401.10899
Responsible Artificial Intelligence Institute. (2024). Our responsible AI maturity model. https://www.responsible.ai/our-responsible-ai-maturity-model
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). https://doi.org/10.1145/2939672.2939778
Rohde, F., Wagner, J., Meyer, A., Reinhard, P., Voss, M., Petschow, U., & Mollen, A. (2023). Broadening the perspective for sustainable AI: Sustainability criteria and indicators for artificial intelligence systems [Preprint]. arXiv. https://arxiv.org/abs/2306.13686
Rolnick, D., Veit, A., Belongie, S., & Shavit, N. (2017). Deep learning is robust to massive label noise [Preprint]. arXiv. http://arxiv.org/abs/1705.10694
Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., Ross, A. S., Milojevic-Dupont, N., Jaques, N., Waldman-Brown, A., Luccioni, A. S., Maharaj, T., Sherwin, E. D., Mukkavilli, S. K., Kording, K. P., Gomes, C. P., Ng, A. Y., Hassabis, D., Platt, J. C., … Bengio, Y. (2023). Tackling climate change with machine learning. ACM Computing Surveys, 55(2), 1–96. https://doi.org/10.1145/3485128
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson. https://www.pearson.com/en-ca/subject-catalog/p/artificial-intelligence-a-modern-approach/P200000003500/9780134610993
Russell, S. (2019). Human compatible artificial intelligence and the problem of control. https://doi.org/10.1007/978-3-030-86144-5_3
Saura, J. R., & Debasa, F. (2022). Handbook of research on artificial intelligence in government practices and processes. IGI Global. https://www.igi-global.com/book/handbook-research-artificial-intelligence-government/279857
Schmidpeter, R., & Altenburger, R. (Eds.). (2023). Responsible artificial intelligence: Challenges for sustainable management. Springer. https://link.springer.com/book/10.1007/978-3-031-09245-9
Shams, R. A., Zowghi, D., & Bano, M. (2023). AI and the quest for diversity and inclusion: A systematic literature review. AI and Ethics. https://doi.org/10.1007/s43681-023-00362-w
Siau, K. L., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53. https://www.researchgate.net/publication/324006061
Solove, D. J. (2025). Artificial intelligence and privacy. Florida Law Review, 77(1), 1–73. https://doi.org/10.2139/ssrn.4713111
Solow-Niederman, A. (2022). Information privacy and the inference economy. Northwestern University Law Review, 117(2). https://scholarlycommons.law.northwestern.edu/nulr/vol117/iss2/1/
Tocchetti, A., Corti, L., Balayn, A., Yurrita, M., Lippmann, P., Brambilla, M., & Yang, J. (2022). A.I. robustness: A human-centered perspective on technological challenges and opportunities [Preprint]. arXiv. https://arxiv.org/abs/2210.08906
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., & van Moorsel, A. (2019). The relationship between trust in AI and trustworthy machine learning technologies [Preprint]. arXiv. http://arxiv.org/abs/1912.00782
UNESCO. (2022). Recommendation on the ethics of artificial intelligence. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213–218. https://doi.org/10.1007/s43681-021-00043-6
Voeneky, S., Kellmeyer, P., Mueller, O., & Burgard, W. (Eds.). (2022). The Cambridge handbook of responsible artificial intelligence: Interdisciplinary perspectives. Cambridge University Press. https://doi.org/10.1017/9781009207898
Vorvoreanu, M., Heger, A., Passi, S., Dhanorkar, S., Kahn, Z., & Wang, R. (2023). Responsible AI maturity model: Mapping your organization’s goals on the path to responsible AI (Microsoft White Paper). Microsoft. https://www.microsoft.com/en-us/research/uploads/prod/2023/05/RAI_Maturity_Model_Aether_Microsoft_whitepaper.pdf
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829
Weller, A. (2017). Transparency: Motivations and challenges [Preprint]. arXiv. http://arxiv.org/abs/1708.01870
Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20) (pp. 1–18). https://doi.org/10.1145/3351095.3372833
Williams, H. M., & Yampolskiy, R. V. (2024). Understanding and avoiding AI failures: A practical guide [Preprint]. arXiv. http://arxiv.org/abs/2104.12582
Wirtz, B. W., Langer, P. F., & Weyerer, J. C. (2024). An ecosystem framework of AI governance. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford handbook of AI governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.24
World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. World Health Organization. https://www.who.int/publications/i/item/9789240029200
Yeung, K., Howes, A., & Pogrebna, G. (2019). AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI. Oxford University Press.
Zafar, M. B., Valera, I., Rodriguez, M. G., & Gummadi, K. P. (2015). Fairness constraints: Mechanisms for fair classification [Preprint]. arXiv. http://arxiv.org/abs/1507.05259
Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). Mitigating unwanted biases with adversarial learning [Preprint]. arXiv. http://arxiv.org/abs/1801.07593
Zowghi, D., & Da Rimini, F. (2024). Diversity and inclusion in artificial intelligence. In Q. Lu, L. Zhu, J. Whittle, & X. Xu (Eds.), Responsible AI: Best practices for creating trustworthy AI systems (Chap. 11). Pearson Education. https://www.pearson.com/en-us/subject-catalog/p/responsible-ai-best-practices-for-creating-trustworthy-ai-systems/P200000010211/9780138073886