Accenture, 2024

Accenture. (2024). From compliance to confidence: Embracing a new mindset to advance responsible AI maturity. https://www.accenture.com/us-en/insights/data-ai/compliance-confidence-responsible-ai-maturity

Amodei et al., 2016

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety [Preprint]. arXiv. http://arxiv.org/abs/1606.06565

Ananny & Crawford, 2018

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

Barredo Arrieta et al., 2019

Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2019). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI [Preprint]. arXiv. https://arxiv.org/abs/1910.10045

Bateni et al., 2022

Bateni, A., Chan, M. C., & Eitel-Porter, R. (2022). AI fairness: From principles to practice [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2207.09833

Batool et al., 2023

Batool, A., Zowghi, D., & Bano, M. (2023). Responsible AI governance: A systematic literature review [Preprint]. arXiv. http://arxiv.org/abs/2401.10896

Binns, 2018

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability, and Transparency (pp. 149–159). PMLR. http://proceedings.mlr.press/v81/binns18a/binns18a.pdf

Birkstedt et al., 2023

Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: Themes, knowledge gaps and future agendas. Internet Research, 33(7), 133–167. https://doi.org/10.1108/INTR-01-2022-0042

Bommasani et al., 2021

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J. Q., Demszky, D., … Liang, P. (2021). On the opportunities and risks of foundation models [Preprint]. arXiv. https://arxiv.org/abs/2108.07258

Braiek & Khomh, 2024

Braiek, H. Ben, & Khomh, F. (2024). Machine learning robustness: A primer [Preprint]. arXiv. http://arxiv.org/abs/2404.00897

Brundage et al., 2018

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Ó Éigeartaigh, S., Beard, S., Belfield, H., Farquhar, S., … Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation [Preprint]. arXiv. https://arxiv.org/abs/1802.07228

Buijsman, 2024

Buijsman, S. (2024). Transparency for AI systems: A value-based approach. Ethics and Information Technology, 26(2). https://doi.org/10.1007/s10676-024-09770-w

Bullock et al., 2024

Bullock, J. B., Chen, Y.-C., Himmelreich, J., Hudson, V. M., Korinek, A., Young, M. M., & Zhang, B. (Eds.). (2024). The Oxford handbook of AI governance. Oxford University Press. https://academic.oup.com/edited-volume/41989

Busuioc, 2021

Busuioc, M. (2021). Accountable artificial intelligence: Holding algorithms to account. Public Administration Review, 81(5), 825–836. https://doi.org/10.1111/puar.13293

Carayannis & Grigoroudis, 2023

Carayannis, E. G., & Grigoroudis, E. (Eds.). (2023). Handbook of research on artificial intelligence, innovation and entrepreneurship. Edward Elgar Publishing Limited.

Carlini et al., 2019

Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I., Madry, A., & Kurakin, A. (2019). On evaluating adversarial robustness [Preprint]. arXiv. http://arxiv.org/abs/1902.06705

Caruana et al., 2015

Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1721–1730). https://doi.org/10.1145/2783258.2788613

Cath, 2018

Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0080

Cheong, 2024

Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6. https://doi.org/10.3389/fhumd.2024.1421273

Crawford et al., 2019

Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A., Mathur, V., McElroy, E., Sánchez, A. N., Raji, D., Rankin, J. L., Richardson, R., Schultz, J., West, S. M., & Whittaker, M. (2019). AI Now 2019 report. AI Now Institute. https://ainowinstitute.org/wp-content/uploads/2023/04/AI_Now_2019_Report.pdf

d’Aliberti et al., 2024

d’Aliberti, L., Gronberg, E., & Kovba, J. (2024). Privacy-enhancing technologies for artificial intelligence-enabled systems [Preprint]. arXiv. http://arxiv.org/abs/2404.03509

Danks & London, 2017

Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017). https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf

Doshi-Velez & Kim, 2017

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning [Preprint]. arXiv. http://arxiv.org/abs/1702.08608

Dubber et al., 2020

Dubber, M. D., Pasquale, F., & Das, S. (Eds.). (2020). The Oxford handbook of ethics of AI. Oxford University Press. https://academic.oup.com/edited-volume/34287

Dwork et al., 2011

Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2011). Fairness through awareness [Preprint]. arXiv. http://arxiv.org/abs/1104.3913

European Parliament, 2017

European Parliament. (2017). Resolution of 16 February 2017 with recommendations to the Commission on civil law rules on robotics (TA-8-2017-0051). http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf

Feldman et al., 2014

Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2014). Certifying and removing disparate impact [Preprint]. arXiv. http://arxiv.org/abs/1412.3756

Ferrara, 2024

Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://www.mdpi.com/2413-4155/6/1/3

Fjeld et al., 2020

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. C., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI (Berkman Klein Center Research Publication No. 2020-1). https://doi.org/10.2139/ssrn.3518482

Floridi & Taddeo, 2016

Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083). https://doi.org/10.1098/rsta.2016.0360

Floridi et al., 2018

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Fosch-Villaronga & Poulsen, 2022

Fosch-Villaronga, E., & Poulsen, A. (2022). Diversity and inclusion in artificial intelligence. In B. Custers & E. Fosch-Villaronga (Eds.), Law and Artificial Intelligence (pp. 109–134). https://doi.org/10.1007/978-94-6265-523-2_6

Gasser & Almeida, 2017

Gasser, U., & Almeida, V. A. F. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. https://doi.org/10.1109/MIC.2017.4180835

Gillis et al., 2024

Gillis, R., Laux, J., & Mittelstadt, B. (2024). Trust and trustworthiness in artificial intelligence. In R. Paul, E. Carmel, & J. Cobbe (Eds.), Handbook on artificial intelligence and public policy. Edward Elgar. https://doi.org/10.2139/ssrn.4688574

Goodfellow et al., 2014

Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples [Preprint]. arXiv. https://arxiv.org/abs/1412.6572

Google, 2023

Google. (2023). Google AI principles. https://ai.google/responsibility/principles/

Government of Canada, 2023

Government of Canada. (2023). The artificial intelligence and data act (AIDA): Companion document. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document

GSMA, 2024

GSMA. (2024). The GSMA responsible AI maturity roadmap. https://www.gsma.com/solutions-and-impact/connectivity-for-good/external-affairs/wp-content/uploads/2024/09/GSMA-ai4i_The-GSMA-Responsible-AI-Maturity-Roadmap_v8.pdf

Guihot et al., 2017

Guihot, M., Matthew, A. F., & Suzor, N. P. (2017). Nudging robots: Innovative solutions to regulate artificial intelligence. https://ssrn.com/abstract=3017004

Habbal et al., 2024

Habbal, A., Ali, M. K., & Abuzaraida, M. A. (2024). Artificial intelligence trust, risk and security management (AI TRiSM): Frameworks, applications, challenges and future research directions. Expert Systems with Applications, 240, Article 122442. https://doi.org/10.1016/j.eswa.2023.122442

Hamon et al., 2020

Hamon, R., Junklewitz, H., & Sanchez Martin, J. I. (2020). Robustness and explainability of artificial intelligence: From technical to policy solutions. Publications Office of the European Union. https://publications.jrc.ec.europa.eu/repository/handle/JRC119336

Hardt et al., 2016

Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29 (pp. 3323–3331). https://proceedings.neurips.cc/paper_files/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf

Haresamudram et al., 2023

Haresamudram, K., Larsson, S., & Heintz, F. (2023). Three levels of AI transparency. Computer, 56(2), 93–100. https://doi.org/10.1109/MC.2022.3213181

Hendrycks & Dietterich, 2018

Hendrycks, D., & Dietterich, T. G. (2018). Benchmarking neural network robustness to common corruptions and surface variations [Preprint]. arXiv. http://arxiv.org/abs/1807.01697

Hendrycks et al., 2023

Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An overview of catastrophic AI risks [Preprint]. arXiv. http://arxiv.org/abs/2306.12001

High-Level Expert Group on Artificial Intelligence, 2019

High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

High-Level Expert Group on Artificial Intelligence, 2020

High-Level Expert Group on Artificial Intelligence. (2020). Assessment list for trustworthy artificial intelligence (ALTAI) for self-assessment. European Commission. https://doi.org/10.2759/791819

Holstein et al., 2019

Holstein, K., Vaughan, J. W., Daumé, H., Dudík, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3290605.3300830

Hu et al., 2021

Hu, Y., Kuang, W., Qin, Z., Li, K., Zhang, J., Gao, Y., Li, W., & Li, K. (2021). Artificial intelligence security: Threats and countermeasures. ACM Computing Surveys, 55(1). https://doi.org/10.1145/3487890

IBM, 2024

IBM. (2024). IBM AI ethics. https://www.ibm.com/impact/ai-ethics

Jobin et al., 2019

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Kairouz et al., 2021

Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., D’Oliveira, R. G. L., Eichner, H., El Rouayheb, S., Evans, D., Gardner, J., Garrett, Z., Gascón, A., Ghazi, B., Gibbons, P. B., … Zhao, S. (2021). Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1–2), 1–210. https://doi.org/10.1561/2200000083

Kamiran & Calders, 2012

Kamiran, F., & Calders, T. (2012). Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1), 1–33. https://doi.org/10.1007/s10115-011-0463-8

Kleinberg et al., 2016

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores [Preprint]. arXiv. http://arxiv.org/abs/1609.05807

Kusner et al., 2017

Kusner, M. J., Loftus, J. R., Russell, C., & Silva, R. (2017). Counterfactual fairness [Preprint]. arXiv. http://arxiv.org/abs/1703.06856

Leslie et al., 2024

Leslie, D., Rincón, C., Briggs, M., Perini, A., Jayadeva, S., Borda, A., Bennett, S., Burr, C., Aitken, M., Katell, M., Fischer, C., Wong, J., & Garcia, I. K. (2024). AI fairness in practice. The Alan Turing Institute. https://www.turing.ac.uk/sites/default/files/2023-12/aieg-ati-fairness_1.pdf

Leslie, 2019

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529

Lipton, 2016

Lipton, Z. C. (2016). The mythos of model interpretability [Preprint]. arXiv. https://arxiv.org/abs/1606.03490

Lu et al., 2024

Lu, Q., Zhu, L., Whittle, J., & Xu, X. (2024). Responsible AI: Best practices for creating trustworthy AI systems. Pearson Education. https://www.pearson.com/en-us/subject-catalog/p/responsible-ai-best-practices-for-creating-trustworthy-ai-systems/P200000010211/9780138073886

Lundberg & Lee, 2017

Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions [Preprint]. arXiv. https://arxiv.org/abs/1705.07874

Mehrabi et al., 2019

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning [Preprint]. arXiv. http://arxiv.org/abs/1908.09635

Microsoft, 2022

Microsoft. (2022). Microsoft responsible AI standard, v2. https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf

Mittelstadt, 2019

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4

Monday, 2022

Monday, L. M. (2022). Define, measure, analyze, improve, control (DMAIC) methodology as a roadmap in quality improvement. Global Journal on Quality and Safety in Healthcare, 5(2), 44–46. https://doi.org/10.36401/jqsh-22-x2

Novelli et al., 2024

Novelli, C., Taddeo, M., & Floridi, L. (2024). Accountability in artificial intelligence: What it is and how it works. AI and Society, 39(4), 1871–1882. https://doi.org/10.1007/s00146-023-01635-y

Obermeyer et al., 2019

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://www.science.org/doi/10.1126/science.aax2342

Office of the European Union, 2024

Office of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act): Text with EEA relevance. http://data.europa.eu/eli/reg/2024/1689/oj

Pasquale, 2015

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. https://www.hup.harvard.edu/books/9780674970847

Rahwan, 2017

Rahwan, I. (2017). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8

Raji & Dobbe, 2023

Raji, I. D., & Dobbe, R. (2023). Concrete problems in AI safety, revisited [Preprint]. arXiv. https://arxiv.org/abs/2401.10899

Responsible Artificial Intelligence Institute, 2024

Responsible Artificial Intelligence Institute. (2024). Our responsible AI maturity model. https://www.responsible.ai/our-responsible-ai-maturity-model

Ribeiro et al., 2016

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). https://doi.org/10.1145/2939672.2939778

Rohde et al., 2023

Rohde, F., Wagner, J., Meyer, A., Reinhard, P., Voss, M., Petschow, U., & Mollen, A. (2023). Broadening the perspective for sustainable AI: Sustainability criteria and indicators for artificial intelligence systems [Preprint]. arXiv. https://arxiv.org/abs/2306.13686

Rolnick et al., 2017

Rolnick, D., Veit, A., Belongie, S., & Shavit, N. (2017). Deep learning is robust to massive label noise [Preprint]. arXiv. http://arxiv.org/abs/1705.10694

Rolnick et al., 2023

Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., Ross, A. S., Milojevic-Dupont, N., Jaques, N., Waldman-Brown, A., Luccioni, A. S., Maharaj, T., Sherwin, E. D., Mukkavilli, S. K., Kording, K. P., Gomes, C. P., Ng, A. Y., Hassabis, D., Platt, J. C., … Bengio, Y. (2023). Tackling climate change with machine learning. ACM Computing Surveys, 55(2), 1–96. https://doi.org/10.1145/3485128

Rudin, 2019

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x

Russell & Norvig, 2021

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson. https://www.pearson.com/en-ca/subject-catalog/p/artificial-intelligence-a-modern-approach/P200000003500/9780134610993

Russell, 2019

Russell, S. (2019). Human compatible artificial intelligence and the problem of control. https://doi.org/10.1007/978-3-030-86144-5_3

Saura & Debasa, 2022

Saura, J. R., & Debasa, F. (2022). Handbook of research on artificial intelligence in government practices and processes. IGI Global. https://www.igi-global.com/book/handbook-research-artificial-intelligence-government/279857

Schmidpeter & Altenburger, 2023

Schmidpeter, R., & Altenburger, R. (Eds.). (2023). Responsible artificial intelligence: challenges for sustainable management. Springer. https://link.springer.com/book/10.1007/978-3-031-09245-9

Shams et al., 2023

Shams, R. A., Zowghi, D., & Bano, M. (2023). AI and the quest for diversity and inclusion: A systematic literature review. AI and Ethics. https://doi.org/10.1007/s43681-023-00362-w

Siau & Wang, 2018

Siau, K. L., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53. https://www.researchgate.net/publication/324006061

Solove, 2025

Solove, D. J. (2025). Artificial intelligence and privacy. Florida Law Review, 77(1), 1–73. https://doi.org/10.2139/ssrn.4713111

Solow-Niederman, 2022

Solow-Niederman, A. (2022). Information privacy and the inference economy. Northwestern University Law Review, 117(2). https://scholarlycommons.law.northwestern.edu/nulr/vol117/iss2/1/

Tocchetti et al., 2022

Tocchetti, A., Corti, L., Balayn, A., Yurrita, M., Lippmann, P., Brambilla, M., & Yang, J. (2022). A.I. robustness: A human-centered perspective on technological challenges and opportunities [Preprint]. arXiv. https://arxiv.org/abs/2210.08906

Toreini et al., 2019

Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., & van Moorsel, A. (2019). The relationship between trust in AI and trustworthy machine learning technologies [Preprint]. arXiv. http://arxiv.org/abs/1912.00782

UNESCO, 2022

UNESCO. (2022). Recommendation on the ethics of artificial intelligence. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

van Wynsberghe, 2021

van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213–218. https://doi.org/10.1007/s43681-021-00043-6

Voeneky et al., 2022

Voeneky, S., Kellmeyer, P., Mueller, O., & Burgard, W. (Eds.). (2022). The Cambridge handbook of responsible artificial intelligence: Interdisciplinary perspectives. Cambridge University Press. https://doi.org/10.1017/9781009207898

Vorvoreanu et al., 2023

Vorvoreanu, M., Heger, A., Passi, S., Dhanorkar, S., Kahn, Z., & Wang, R. (2023). Responsible AI maturity model: Mapping your organization’s goals on the path to responsible AI (Microsoft White Paper). Microsoft. https://www.microsoft.com/en-us/research/uploads/prod/2023/05/RAI_Maturity_Model_Aether_Microsoft_whitepaper.pdf

Wachter & Mittelstadt, 2019

Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829

Weller, 2017

Weller, A. (2017). Transparency: Motivations and challenges [Preprint]. arXiv. http://arxiv.org/abs/1708.01870

Wieringa, 2020

Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20) (pp. 1–18). https://doi.org/10.1145/3351095.3372833

Williams & Yampolskiy, 2024

Williams, H. M., & Yampolskiy, R. V. (2024). Understanding and avoiding AI failures: A practical guide [Preprint]. arXiv. http://arxiv.org/abs/2104.12582

Wirtz et al., 2024

Wirtz, B. W., Langer, P. F., & Weyerer, J. C. (2024). An ecosystem framework of AI governance. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford handbook of AI governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.24

World Health Organization, 2021

World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. World Health Organization. https://www.who.int/publications/i/item/9789240029200

Yeung et al., 2019

Yeung, K., Howes, A., & Pogrebna, G. (2019). AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI. Oxford University Press.

Zafar et al., 2015

Zafar, M. B., Valera, I., Rodriguez, M. G., & Gummadi, K. P. (2015). Fairness constraints: Mechanisms for fair classification [Preprint]. arXiv. http://arxiv.org/abs/1507.05259

Zhang et al., 2018

Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). Mitigating unwanted biases with adversarial learning [Preprint]. arXiv. http://arxiv.org/abs/1801.07593

Zowghi & Da Rimini, 2024

Zowghi, D., & Da Rimini, F. (2024). Diversity and inclusion in artificial intelligence. In Q. Lu, L. Zhu, J. Whittle, & X. Xu (Eds.), Responsible AI: Best practices for creating trustworthy AI systems (Chap. 11). Pearson Education. https://www.pearson.com/en-us/subject-catalog/p/responsible-ai-best-practices-for-creating-trustworthy-ai-systems/P200000010211/9780138073886