[1] B. Shneiderman. Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12:109--124, 2020.
[2] B. Friedman. Value-sensitive design. interactions, 3(6):16--23, 1996.
[3] Iason Gabriel and Vafa Ghazavi. The challenge of value alignment: From fairer algorithms to AI safety. Minds and Machines, 31(4):629--653, 2021. [ DOI ]
[4] Luke Munn. The uselessness of AI ethics. AI and Society, 2023. [ DOI ]
[5] R. Baeza-Yates. Bias on the web. Communications of the ACM, 61(6):54--61, 2018.
[6] Association for Computing Machinery. ACM Code of Ethics and Professional Conduct, June 2018. [ http ]
[7] Alessio Malizia and Fabio Paternò. Why is the current XAI not meeting the expectations? Communications of the ACM, 66(12):20--22, 2023. [ DOI ]
[8] Elettra Bietti. From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. Philosophy & Technology, 33(4):541--559, 2020. [ DOI | http ]
[9] Mariarosaria Taddeo and Alexander Blanchard. Accepting moral responsibility for the actions of autonomous weapons systems—a moral gambit. Philosophy & Technology, 35(3):1--24, 2022. [ DOI ]
[10] Norman Daniels. Reflective equilibrium. https://plato.stanford.edu/archives/sum2020/entries/reflective-equilibrium/, 2020. Stanford Encyclopedia of Philosophy, Summer 2020 Edition, edited by Edward N. Zalta.
[11] Tom L. Beauchamp and David DeGrazia. Principles and principlism. In Raanan Gillon, editor, Principles of Health Care Ethics, pages 55--66. John Wiley & Sons, 2004.
[12] Geoffrey Sayre-McCord. Metaethics. https://iep.utm.edu/metaethi/. Internet Encyclopedia of Philosophy.
[13] Susan J. Ashford. Developing as a leader: The power of mindful engagement. Organizational Dynamics, 41(2):146--154, 2012.
[14] Paul Formosa, Michael Wilson, and Deborah Richards. A principlist framework for cybersecurity ethics. Computers & Security, 105:102226, 2021. [ DOI ]
[15] Kevin Macnish and Jeroen van der Ham. Ethics in cybersecurity research and practice. Technology in Society, 63:101382, 2020. [ DOI ]
[16] Sebastian Sequoiah-Grayson. The unsatisfiable triad: A problem for automated decision making. Unpublished manuscript, 2025. [ http ]
[17] Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017. [ http ]
[18] Giandomenico Cornacchia, Vito Walter Anelli, Fedelucio Narducci, Azzurra Ragone, and Eugenio Di Sciascio. Counterfactual reasoning for bias evaluation and detection in a fairness under unawareness setting, 2023. [ arXiv | http ]
[19] Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making. Communications of the ACM, 64(4):136--143, March 2021. [ DOI | http ]
[20] R. Manna and R. Nath. Kantian moral agency and the ethics of artificial intelligence. Problemos, 100:139--151, 2021.
[21] R. Nath and V. Sahu. The problem of machine ethics in artificial intelligence. AI & Society, 35:103--111, 2021.
[22] R. Tonkens. A challenge for machine ethics. Minds & Machines, 19:421--438, 2009.
[23] L. Singh. Automated Kantian ethics: A faithful implementation, 2022. Online at https://github.com/lsingh123/automatedkantianethics.
[24] European Commission’s High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy artificial intelligence. Technical Report 6, European Commission, 2019. p. 17.
[25] J. Fjeld, N. Achten, H. Hilligoss, A. C. Nagy, and M. Srikumar. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. arXiv preprint arXiv:2009.06350, 2020.
[26] M. M. Bentzen and F. Lindner. A formalization of Kant's second formulation of the categorical imperative, 2018. CoRR abs/1801.03160. [ arXiv | http ]
[27] John R. Searle. Minds, Brains, and Science. Harvard University Press, Cambridge, 1996. See p. 41.
[28] Tom M. Powers. Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4):46--51, 2006.
[29] Immanuel Kant. Fundamental Principles of the Metaphysic of Morals. Prometheus Books, New York, 1785/1988.
[30] Immanuel Kant. Fundamental Principles of the Metaphysic of Morals. Prometheus Books, New York, 1785/1988.
[31] Christopher Bennett. What Is This Thing Called Ethics?, chapter 4--6. Routledge, London, 2015. Chapters on Utilitarianism, Kantian Ethics, and Aristotelian Virtue Ethics.
[32] I. Ahmed, M. Kajol, U. Hasan, P. P. Datta, A. Roy, and M. R. Reza. ChatGPT versus Bard: A comparative study. Engineering Reports, 6(11):e12890, 2024.
[33] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, and D. Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877--1901, 2020.
[34] R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H. T. Cheng, and Q. Le. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022.
[35] H. Alkaissi and S. I. McFarlane. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), 2023.
[36] M. Chelli, J. Descamps, V. Lavoué, C. Trojani, M. Azar, M. Deckert, and C. Ruetsch-Chelli. Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews: Comparative analysis. Journal of Medical Internet Research, 26:e53164, 2024.
[37] T. G. Heck. What artificial intelligence knows about 70 kDa heat shock proteins, and how we will face this ChatGPT era. Cell Stress and Chaperones, 28(3):225--229, 2023.
[38] S. A. Athaluri, S. V. Manthena, V. K. M. Kesapragada, V. Yarlagadda, T. Dave, and R. T. S. Duddumpudi. Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4), 2023.
[39] M. Nasr, N. Carlini, J. Hayase, M. Jagielski, A. F. Cooper, D. Ippolito, and K. Lee. Scalable extraction of training data from (production) language models. arXiv preprint arXiv:2311.17035, 2023.
[40] Shalva Kikalishvili. Unlocking the potential of GPT-3 in education: Opportunities, limitations, and recommendations for effective integration. Interactive Learning Environments, 32, 2023.
[41] Anaïs Tack and Chris Piech. The AI teacher test: Measuring the pedagogical ability of Blender and GPT-3 in educational dialogues. 2022.
[42] Aditi Kavia and Kumari Simran Sharma. ChatGPT and copyright: Legal and ethical challenges. July 2023.
[43] P. P. Ray. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3(1):121--154, 2023.
[44] Anusha Sumbal, Ramish Sumbal, and A. Amir. Can ChatGPT-3.5 pass a medical exam? A systematic review of ChatGPT's performance in academic testing. Journal of Medical Education and Curricular Development, 11, 2024.
[45] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C.L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A.K. Ray, J. Schulman, J.K. Hilton, F. Kelton, L.P. Miller, M. Simens, A. Askell, P. Welinder, P.F. Christiano, J. Leike, and R.J. Lowe. Training language models to follow instructions with human feedback. arXiv (Cornell University), 2022.
[46] Association for Computing Machinery. ACM Code of Ethics and Professional Conduct, 2018. [ http ]

This file was generated by bibtex2html 1.99.