Trust and Trustworthiness in AI Ethics
In her paper "Trust and Trustworthiness in AI Ethics", Karoline Reinhardt discusses the growing conversation about trust in artificial intelligence as AI becomes more common in society. Following the introduction of guidelines for trustworthy AI in 2019, the concepts of trust and trustworthiness have taken center stage in ethical discussions.
Reinhardt analyses how different ethical guidelines define and approach trust, finding that while there is general agreement on the need for trustworthy AI, the exact meanings of trust and trustworthiness remain unclear and are often applied inconsistently. She points out that trust is frequently treated as something unambiguously positive to be promoted, while its complexities and potential downsides are overlooked.
One of the key issues highlighted is that guidelines tend to frame trust building from the perspective of developers, neglecting the role of users and the general public in this dynamic. This is a core issue addressed in the AI-PROGNOSIS project: it is crucial that we strive to develop ethical AI systems that balance the human experience with AI technology.
Reinhardt argues that trust should not be seen solely as a tool to achieve benefits from AI, but rather as a complex relationship that involves vulnerability.
Her paper also emphasises that future guidelines need to acknowledge that trust can be ambivalent and that excessive trust in AI can be dangerous. Reinhardt suggests that instead of merely striving for more trust in AI, we should develop mechanisms that encourage scepticism and critical evaluation, helping to prevent misuse of, or overreliance on, AI systems.