ABSTRACT
Trust is one of the most critical relations in our human lives, whether trust in one another, trust in the artifacts that we use every day, or trust in an AI system. Even a cursory examination of the literatures in human-computer interaction, human-robot interaction, and numerous other disciplines reveals a deep, persistent concern with the nature of trust in AI, and the conditions under which it can be generated, reduced, repaired, or influenced. At a high level, we often understand trust as a relation in which the trustor makes oneself vulnerable based on positive expectations about the behavior or intentions of the trustee [1]. For example, when I trust my car to start in the morning, I make myself vulnerable (e.g., I risk being late to work if it does not start) because I have the positive expectation that it actually will start. This high-level characterization is relatively unhelpful, however, particularly given the wide range of disciplines that have examined the relation of trust, ranging from organizational behavior to game theory to ethics to cognitive science. The picture that emerges from, for example, social psychology (i.e., two distinct kinds of trust, depending on whether one knows the trustee's behaviors or intentions/values) appears to be quite different from the one that emerges from moral philosophy (i.e., a single, highly-moralized notion), even though both are consistent with this high-level characterization. This talk first introduces that diversity of types of 'trust', but then argues that we can make progress towards a unified characterization by focusing on the function of trust. That is, we should ask why we care whether we can trust our artifacts, AI, or fellow humans, as that can help to illuminate features of trust that are shared across domains, trustors, and trustees.
I contend that one reason to desire trust is an "almost-necessary" condition on ethical action: namely, that the user has a reasonable belief that the system (whether human or machine) will behave approximately as intended. This condition is obviously not sufficient for ethical use, nor is it strictly necessary, since the best available option might nonetheless be one for which the user lacks appropriate reasonable beliefs. Nonetheless, it provides a reasonable starting point for an analysis of 'trust'. More precisely, I propose that this condition indicates a role for trust as providing precisely those reasonable beliefs, at least when we have appropriately grounded trust. That is, we can understand 'appropriate trust' as obtaining when the trustor has justified beliefs that the trustee has suitable dispositions. Because there is variation in the trustor's goals and values, and also in the openness of the context of use, different specific versions of 'appropriate trust' result, as those variations lead to different types of focal dispositions, specific dispositions, or observability of dispositions, respectively. For example, in an open context (i.e., one where the possibilities cannot be exhaustively enumerated), the trustee's full dispositions will not be directly observable, but rather must be inferred from observations. This framework provides a unification of the different theories of 'trust' developed in different disciplines. Moreover, it provides clarity about one key function of trust, and thereby helps us to understand the value of (appropriate) trust. We need to trust our AI systems because that is a precondition for the ethical, responsible use of them.
- [1] D. M. Rousseau, S. B. Sitkin, R. S. Burt, and C. Camerer (1998). Not So Different after All: A Cross-Discipline View of Trust. The Academy of Management Review, 23, 393-404.