How Should AI Systems Talk to Users When Collecting Their Personal Information? Effects of Role Framing and Self-Referencing on Human-AI Interaction
AI systems collect our personal information in order to provide personalized services, raising privacy concerns and making users wary. In response, systems have begun emphasizing overt over covert collection of information by directly asking users. This poses an important question for ethical interaction design, which is dedicated to improving user experience while promoting informed decision-making: Should the interface tout the benefits of information disclosure and frame itself as a help-provider? Or should it appear as a help-seeker? We explored this question by creating a mockup of a news recommendation system called Mindz and conducting an online user study (N=293) with four variations: AI system as help-seeker vs. help-provider vs. both vs. neither. Data showed that even though all participants received the same recommendations, power users tended to trust the help-seeking Mindz more, whereas non-power users favored the version that was both help-seeker and help-provider.
Metadata
Work Title | How Should AI Systems Talk to Users When Collecting Their Personal Information? Effects of Role Framing and Self-Referencing on Human-AI Interaction |
---|---|
Access | |
Creators | |
Keyword | |
License | In Copyright (Rights Reserved) |
Work Type | Article |
Publisher | |
Publication Date | May 7, 2021 |
Publisher Identifier (DOI) | |
Deposited | October 14, 2024 |