Recommendation systems are powerful revenue engines. Many types and logics exist, but most rely heavily on personal data. Internet users and regulators exert growing pressure on how personal data is used, which curbs platforms' ability to collect it. Recent research investigated how perceived usefulness could play a significant role in users' consent to share their personal data. I review the research and discuss the implications for strategy.
The research results
The research: "Perceived usefulness: A silver bullet to assure user data availability for online recommendation systems", by Mican et al. (2020)
Analysing 4 main types of recommendation systems (RS) and 5 categories of data collected, this study concludes that the more useful a recommendation system is perceived to be, the more willing users are to share their data. It also concludes the converse: the more data users agree to share, the more relevant they perceive the recommendation system to be.
the more useful the recommendation system is perceived to be, the more willing users are to share their data.
The authors also conclude that users' acceptance of RSs collecting and storing various types of data is not equally influenced by all types of RS:
- the perceived usefulness of RS based on personal online shopping behavior has a massive impact on users' consent to share their likes/dislikes data and their personal shopping behavior data.
- the perceived usefulness of RS based on others' online shopping behavior has a massive impact on users' consent to share data on friends' and acquaintances' shopping behavior and, more surprisingly, on their own personal/sensitive information.
- the perceived usefulness of RS based on special offers or on general trends has a massive impact on users' consent to share their reviews/ratings/interests data and their friends' and acquaintances' shopping behavior data.
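The four RS types contrasted in the study differ mainly in which signal drives the recommendation, and therefore in which data they need users to share. As a minimal illustrative sketch (all data, function names, and selection rules below are my own hypothetical examples, not the authors' implementation):

```python
# Toy sketch of the 4 RS types: each recommends from a different signal,
# and each requires a different category of shared data.
from collections import Counter

def recommend_personal(user_history, catalog, k=3):
    """RS based on the user's own shopping behavior: top category bought."""
    top = Counter(item["category"] for item in user_history).most_common(1)
    if not top:
        return []
    category = top[0][0]
    return [p for p in catalog if p["category"] == category][:k]

def recommend_social(friends_histories, catalog, k=3):
    """RS based on others' (friends'/acquaintances') shopping behavior."""
    counts = Counter(item["name"] for h in friends_histories for item in h)
    names = {name for name, _ in counts.most_common(k)}
    return [p for p in catalog if p["name"] in names]

def recommend_offers(catalog, k=3):
    """RS based on special offers: needs no personal data at all."""
    return [p for p in catalog if p.get("on_offer")][:k]

def recommend_trending(global_sales, catalog, k=3):
    """RS based on general trends: aggregate, anonymised popularity."""
    names = {name for name, _ in Counter(global_sales).most_common(k)}
    return [p for p in catalog if p["name"] in names]
```

Note how only the first two functions consume personal or social data; the last two work from offers and aggregate sales, which is relevant to the strategy questions at the end of this post.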
What does it mean
- a new illustration of the flywheel effect of data collection (see: The 3 roles data plays in platform): better (i.e. more meaningful and relevant) recommendations lead users to willingly share more data, which in turn improves the system and yields even more user data
- focusing on perceived usefulness is critical to incentivise users to share their data
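The flywheel can be made concrete with a toy feedback-loop simulation. The linear update rules and coefficients below are my own illustrative assumptions, not estimates from the study:

```python
# Toy simulation of the data-collection flywheel:
# perceived relevance -> share rate -> more data -> better relevance.
# Coefficients are arbitrary illustrative assumptions.
def flywheel(steps=5, relevance=0.2):
    data = 0.0
    history = []
    for _ in range(steps):
        share_rate = min(1.0, 0.5 * relevance)  # usefulness drives consent
        data += share_rate                       # consent yields more data
        relevance = min(1.0, 0.2 + 0.3 * data)   # more data improves relevance
        history.append(round(relevance, 3))
    return history
```

Running `flywheel()` yields a strictly increasing relevance series until it saturates at 1.0, mirroring the self-reinforcing loop described above.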
A bit of critique
It’s a very interesting study that helps bridge a gap in our understanding of the subtle relations between types of RS and types of data, and between perceived usefulness, consent, and perceived relevance.
The authors conclude: “Users should, therefore, be notified that the greater the amount of personal data a recommender collects, the more accurate and relevant its recommendations. Further, the experience will allow users to understand that the exchange of their data for relevant recommendations is a fair one.”
This is, for me, the most debatable part: a recommender system being perceived as useful does not make it fair, and users feeling that recommendations are relevant to them does not make those recommendations fair either.
a recommender system being perceived as useful does not make it fair.
Focusing on perception sidesteps the main issue with recommendation systems: the asymmetry of information between users and the platform, and the practices platforms use to manipulate their users’ perceptions for their own exclusive benefit.
Useful questions for strategy making
- recommendation systems are powerful revenue engines, and we know more and more about their types and logic => which systems are you using? Do you assess their performance against alternative systems (e.g. user behaviour vs general trends)?
- the majority of personalized recommendation systems rely heavily on personal data => what is your strategy and where do you stand: designing systems to increase data collection? Using recommendation systems that need no personal data (e.g. expert recommendations)? Letting users decide on the level of data you use?