Identifying Ethical Issues in Online Dating Apps

This post outlines my examination of ethical issues in online dating apps, and potential solutions to those issues.

Task Characterization

In the contemporary digital landscape, dating apps have revolutionized the dynamics of romantic relationships, with an estimated 2.49 million adults in the UK alone engaging with these platforms. This digital revolution has not only transformed how we find partners but has also redefined the pathways to social and economic power that relationships traditionally offer. In modern society, being in a relationship, particularly within recognized legal and social frameworks, is often associated with a myriad of benefits—from the accumulation and transfer of wealth through combined incomes, shared investments, and inheritance, to the conferment of societal legitimacy and status that facilitates rights such as adoption.

The implicit power dynamics embedded in relationships thus underscore a critical ethical consideration: the role of dating apps in mediating access to these forms of power. By leveraging sophisticated machine learning (ML) algorithms to match individuals, dating apps wield significant influence over who gets to meet whom, potentially dictating the ease of access to relationships based on algorithmic preferences. This influence raises pressing questions about fairness, especially in the context of sensitive attributes such as race, age and religion, which should not serve as barriers to the social and economic benefits afforded by relationships. Despite this, many online dating platforms require new users to enter sensitive attributes about themselves to create a profile, and allow users to filter potential matches by race.

However, we must also reckon with our own human behaviour when it comes to relationships. “Almost all characteristics, physical and psychological, in which humans are found to differ show some evidence of positive assortative mating,” according to this research article. Is it our place to disrupt this natural order when it comes to sensitive attributes such as race? Such preferences could rightly be construed as racist; on the other hand, who and what we find attractive is a profoundly private matter. There is no societal consensus on this question, and so different online dating platforms have adopted different policies.

Due to the proprietary nature of dating apps, the specific goals that the ML matching algorithms are optimized for are kept purposefully murky. For non-hookup dating apps, the goal should presumably be to help users find love. Of course, if two users foster a lasting relationship, that is two fewer users on the app, and thus two fewer users engaging with it, which means fewer people viewing ads and fewer subscriptions, leading to a loss in profit. Companies are well aware of this; indeed, the dating app “Hinge” plays with this reality in its slogan “Designed to be deleted”. Hinge developers have claimed that their algorithm is “open” (but not open-sourced) and is based on the “Nobel prize-winning algorithm called the Gale-Shapley algorithm”. Despite this, Match Group (the parent company of Hinge) is being sued in a class action lawsuit. “Match intentionally designs the platforms with addictive, game-like design features, which lock users into a perpetual pay-to-play loop that prioritizes corporate profits over its marketing promises and customers’ relationship goals,” reads the complaint.
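For context, the Gale-Shapley “deferred acceptance” algorithm Hinge cites is a classic two-sided stable-matching procedure. Hinge’s actual system is proprietary and almost certainly far more elaborate; the sketch below is only the textbook version, with hypothetical example preference lists, to illustrate what the algorithm fundamentally does.

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching as {proposer: reviewer}.

    proposer_prefs / reviewer_prefs: dict mapping each person to an
    ordered list of preferred partners on the other side.
    """
    # rank[r][p] = how highly reviewer r ranks proposer p (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                    # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}   # index of next reviewer to propose to
    engaged = {}                                   # reviewer -> current proposer

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]      # propose to next-best reviewer
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                         # reviewer accepts first proposal
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                # reviewer trades up; old match freed
            engaged[r] = p
        else:
            free.append(p)                         # proposal rejected; p stays free

    return {p: r for r, p in engaged.items()}

# Hypothetical 3x3 example (names are placeholders, not real data):
proposers = {"a": ["x", "y", "z"], "b": ["y", "x", "z"], "c": ["x", "y", "z"]}
reviewers = {"x": ["b", "a", "c"], "y": ["a", "b", "c"], "z": ["a", "b", "c"]}
print(gale_shapley(proposers, reviewers))
```

Note that the classic algorithm assumes complete, honest preference rankings on both sides — an assumption a real dating platform cannot make, which is one reason the actual optimization targets remain opaque.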

Identified Issues

  1. Racial Biases: The requirement for users to enter sensitive attributes, such as race, and allowing users to filter potential matches by these attributes, can disproportionately impact minority groups. This setup may perpetuate existing social biases and inequalities, potentially marginalizing these groups within the dating app ecosystem. Empirical studies support these observations. For instance, findings indicate that gay and bisexual men who are frequent users of intimate platforms tend to have a less positive view of multiculturalism and are more accepting of sexual racism. This attitude is attributed to the use of “simplified racial labels” in profile creation and within search and filtering mechanisms on these platforms. Researchers argue that such design elements promote the notion that simplified racial categories are effective, natural, or suitable for identifying individuals and determining sexual (dis)interest.
  2. Users Seeking Long-Term Relationships: Users genuinely seeking long-term connections might be disadvantaged by algorithms optimized for user engagement rather than meaningful matchmaking. If the algorithms prioritize keeping users on the platform over successful match outcomes, individuals desiring serious relationships may find the platform less effective.
  3. Economic Disparities Among Users: The platform exhibits an economic divide where users with the financial means to afford premium features gain access to potentially superior matching opportunities. Conversely, users with constrained financial resources may find themselves unable to utilize these enhanced features, diminishing their likelihood of finding a match in comparison to their more affluent counterparts.
  4. Fairness in User Autonomy: The question arises whether machine learning (ML) algorithms should act contrary to a user’s explicitly stated preferences. For instance, in 2016, the “CoffeeMeetsBagel” dating app faced scrutiny when it was found that its matching algorithm predominantly recommended potential partners of the same race to users, even those who had indicated no racial preference in their matches. This raises a critical issue of fairness: Is it morally acceptable for an ML algorithm to deduce preferences that a user has not expressed? While the algorithm’s behavior can be rationalized by the “assortative mating” phenomenon described earlier, whether it is fair to override a user’s stated preferences is debatable.

One can raise several other fairness issues regarding dating apps; however, in the interest of scope, this discussion is limited to the four issues described above.

Possible Ways of Addressing the Identified Issues

  1. Addressing Racial Bias: Dating apps allow users to search and filter by sensitive attributes, notably race. By including these design features, platforms allow users to act upon and/or reinforce their racial biases. Removing these search and filtering tools could be a potential solution; however, this would impact user autonomy. A deeper, more nuanced approach would be to introduce elements of “seamful design” into the sorting and filtering functions. “Seamful design accepts that technology has limits, and instead of disguising these limits to the user, ‘pulls back the curtain’ to give the user greater understanding of how systems work.” For instance, when users attempt to apply racial filters in their search criteria, the app could present a notification or an interactive element that explains the impact of these choices, including information on how such filters may perpetuate biases and limit the diversity of potential matches. This form of seamful design not only educates users about the underlying system’s workings but also prompts them to consider the broader consequences of their preferences on inclusivity and fairness, promoting awareness and encouraging users to be more mindful. A potential downside to this solution is the possibility of overwhelming the user with information, and the chance that users simply ignore it and continue to use the sorting and filtering functions as before. Some dating apps are moving away from categorizing users by sensitive characteristics and are instead creating their own attributes to characterize their users. For example, the Japanese gay dating app “9MONSTER” categorizes users into different animal categories, e.g. “Chubby Piggy”, “Muscle Wolf”, “Slim Cat”.
  2. Addressing Users Seeking Long-Term Relationships: A potential solution could involve the development and implementation of a dual-algorithm system within dating apps, specifically designed to cater to users seeking long-term relationships. This system would operate alongside the existing algorithm but with an optimization focus on compatibility and long-term match potential rather than mere engagement. Users could opt into this mode, signalling their preference for serious matchmaking, which would adjust the algorithm’s parameters to prioritize factors associated with lasting relationships. This approach allows for the coexistence of diverse user goals within the same platform. There are issues with this solution though, mainly that creating an algorithm capable of predicting long-term matches would be very complex, and would require substantial investment to build, which is unlikely to be forthcoming due to the lack of monetary incentive for dating apps.
  3. Addressing the Economic Disparities among Users: A viable solution to address economic disparities among users involves introducing a tiered-access model that adjusts based on user activity and engagement rather than purely financial transactions. This model would allow users who demonstrate high levels of engagement or contribute positively to the community (e.g., through content creation, active participation, or helping others) to access premium features without direct payment. Essentially, it rewards users for contributing to the vibrancy and inclusivity of the platform, levelling the playing field for those with limited financial resources. This approach democratizes access to premium features, reducing the economic divide and potentially increasing the diversity and richness of interactions on the platform. However, it requires careful calibration to ensure that the criteria for earning premium access are clear, fair, and not easily gamed, while also maintaining the platform’s financial viability. Whilst no dating app has implemented this type of solution, an analogous example is Duolingo, which uses a digital currency called “lingots” that users earn by completing lessons or achieving certain streaks, thus encouraging engagement and progression through educational content.
  4. Addressing Fairness in User Autonomy: Whether it is right for the ML matching algorithm to override a user’s stated preference to combat assortative mating practices is a complex ethical dilemma. There’s a compelling argument for intervention, as highlighted by research from OkCupid co-founder Christian Rudder, which demonstrates that our intimate preferences are malleable, influenced both by the options available to us and our interactions with unexpected matches. By introducing users to new and diverse experiences, it’s possible to consciously (or unconsciously) shift biases and mitigate the effects of assortative mating. An approach could involve the ML algorithm occasionally suggesting profiles that partially or entirely diverge from a user’s specified preferences, with the goal of gradually encouraging more openness in user preferences over time. This solution, however, may frustrate users, who could complain that their stated preferences are not being acknowledged by the algorithm. It may also expose the people behind these suggested profiles to racial abuse, as they may be recommended to racially prejudiced users.
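The “occasional divergent suggestion” idea in point 4 resembles epsilon-greedy exploration from recommender systems. The sketch below is a minimal, hypothetical illustration of that mechanism — the function, the candidate pool, and the age-based filter are all invented for this example, not drawn from any real dating app:

```python
import random

def recommend(candidates, matches_filter, epsilon=0.1, rng=random):
    """Pick one profile: usually one matching the user's stated preferences,
    but with probability roughly `epsilon`, one from the full candidate pool."""
    preferred = [c for c in candidates if matches_filter(c)]
    if preferred and rng.random() > epsilon:
        return rng.choice(preferred)       # exploit: honour stated preferences
    return rng.choice(candidates)          # explore: a divergent suggestion

# Hypothetical pool and a hypothetical age-based preference filter:
pool = [{"name": "p1", "age": 25}, {"name": "p2", "age": 35},
        {"name": "p3", "age": 45}]
pick = recommend(pool, lambda c: c["age"] < 30, epsilon=0.2)
```

Tuning `epsilon` is exactly the ethical trade-off discussed above: a higher value nudges users toward diverse matches more often, but also overrides their stated preferences more often.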

This exploration into the fairness of online dating apps highlights the critical need for ethically designed machine learning algorithms that address racial biases, economic disparities, and user autonomy. By advocating for solutions such as “seamful” design and dual-algorithm systems, this post aims to foster a more inclusive and equitable digital dating environment.
