The use of artificial intelligence models has become commonplace; we use them to write code, summarize readings, draft emails, and even generate images. But when it comes to consequential decisions, such as choosing between job offers or implementing economic policies, our confidence and trust in AI falls away. This raises the intriguing question I tackle in this paper: what would need to happen for people to trust artificial intelligence with important decisions? I argue that trust in AI for high-stakes decisions could be achieved if the technology were anthropomorphized, because anthropomorphism would overcome the psychological barriers that currently prevent us from trusting AI with such decisions.