System2 AI is an ethical machine learning recommendation system that helps users become who they want to be. It is a proof of concept showing how larger companies could adapt their machine learning recommendation algorithms to help people become more well-rounded and balanced in their beliefs, while still serving content their users find engaging.
Current machine learning recommendation systems weigh user actions more or less equally. Because many user actions are automatic and emotional (known as system 1 behaviors), social media echo chambers form. These reinforce users' existing beliefs and lead to arguments, both online and in real life, where neither party has any real chance of changing their views or reaching a compromise.
System2 AI uses metrics such as reaction time, time spent reading or watching, and whether a post covers a novel topic to identify which actions are more personally meaningful to the user. The user behaviors that reflect these more meaningful, deliberate actions, known as system 2 behaviors, are given greater weight by the machine learning algorithm, just as a human would value them more.
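To make the idea concrete, here is a minimal sketch of how an interaction could be scored as system 1 or system 2. All thresholds, multipliers, and field names below are illustrative assumptions, not the project's actual values:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    reaction_time_s: float   # time from seeing the post to acting on it
    dwell_time_s: float      # time spent reading/watching
    expected_time_s: float   # typical time needed to consume the post
    novel_topic: bool        # is the topic new to this user?

def system2_weight(x: Interaction) -> float:
    """Return a weight: higher for deliberate (system 2) actions."""
    weight = 1.0
    # A near-instant reaction suggests an automatic, system-1 response.
    if x.reaction_time_s < 2.0:
        weight *= 0.5
    # Dwell time close to what the content requires suggests the user
    # actually read or watched it.
    if x.dwell_time_s >= 0.8 * x.expected_time_s:
        weight *= 2.0
    # Engaging with a novel topic is a deliberate step outside the echo chamber.
    if x.novel_topic:
        weight *= 1.5
    return weight

# A quick skim-and-like vs. a careful read of something new:
skim = Interaction(reaction_time_s=0.8, dwell_time_s=3.0,
                   expected_time_s=60.0, novel_topic=False)
careful = Interaction(reaction_time_s=5.0, dwell_time_s=55.0,
                      expected_time_s=60.0, novel_topic=True)
print(system2_weight(skim))     # 0.5
print(system2_weight(careful))  # 3.0
```

The exact scoring rule matters less than the asymmetry: a reflexive like counts for little, while a careful read of an unfamiliar topic counts for a lot.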
The proof of concept is a news aggregator website with an ethical recommender. The front end is built with standard web technologies, while the machine learning recommendation system is written in Python with PyTorch.
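One way such per-interaction weights could feed into a PyTorch model is as sample weights in the training loss, so deliberate actions pull the model harder than reflexive ones. This is a hypothetical toy sketch, not the project's actual training code; the model, data, and weights are all placeholders:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(8, 1)           # toy scorer over 8 user/item features
features = torch.randn(4, 8)      # 4 logged interactions
labels = torch.tensor([[1.], [0.], [1.], [1.]])       # engaged or not
weights = torch.tensor([[0.5], [1.0], [3.0], [2.0]])  # system-2 weights

loss_fn = nn.BCEWithLogitsLoss(reduction="none")
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []
for _ in range(100):
    optimizer.zero_grad()
    per_sample = loss_fn(model(features), labels)
    # Scale each interaction's loss by its weight, so deliberate
    # (system 2) behaviors contribute more to the gradient.
    loss = (weights * per_sample).mean()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Because `reduction="none"` keeps one loss term per interaction, the weighting is a simple element-wise multiply before averaging.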
What inspired you (or your team)?
My mom and my classmates have experienced the effects of social media echo chambers. After recent political events, their social media feeds became very partisan. Even if they wanted a more balanced feed in terms of politics, beliefs, and posts, they couldn't, because of how the machine learning recommendation systems that power sites like Facebook work.
I wanted to create a new system that can still intelligently recommend items and posts, but that also helps users like my mom and my classmates achieve their aspiration to become more well-rounded and balanced in their beliefs.