Securing a loan has always been a challenging process. Until the 1950s, the main obstacle borrowers had to overcome was convincing bankers that they were trustworthy recipients. A banker's judgment would be based on the applicant's credit, capital and character, but decisions were often also influenced by information about an individual's social, political and even sexual life. Weighing all these observations, the banker would reach his subjective verdict.

There is no need to point out how much human bias affected such decision making. The 1950s banker would more likely grant a loan to someone from his own network, or at least to someone who in some way reflected the values he held.

In 1956, engineer Bill Fair and mathematician Earl Isaac created a standardised credit scoring system that revolutionised banking. The system, known as FICO, is still widely used in the banking industry. It produces a number between 300 and 850, determined by payment history, amounts owed, length of credit history, types of credit and recent credit applications.

With the new FICO system, it was believed that a more impartial process would replace human subjectivity in loan applications. However, in some respects, FICO complemented and amplified the unfairness that had pervaded the banking industry for decades. The FICO system did nothing to open banking to a broader range of people. It simply employed fairness through unawareness, ignoring loan applicants' sensitive attributes such as race or sex. Such an approach is not enough to fix the issues arising from the structural biases that affect minority groups' chances of securing loans.

Indeed, the FICO system, though often viewed as objective, still facilitates discrimination against certain segments of society. Being unbanked often correlates with low-paying jobs, lack of education and poor neighbourhoods, which, as U.S. history shows, are closely related to race. A U.S. national survey found that minorities make up the largest share of unbanked people, with over 20 percent of Black and 19 percent of Hispanic households around the country having no credit or banking history, compared with only three percent of white households. Another systematic disparity between White/Asian and Black/Hispanic loan applicants concerns interest rates: when the latter do manage to secure loans, they are offered worse terms than the former.

Globally, there are 2.5 billion unbanked people, and fewer than half of the banked population are regarded as eligible for lending. Under the current credit scoring system, this part of the population has little chance of improving their financial status: the system rewards only people with a satisfactory FICO score and offers no opportunity to potentially trustworthy applicants who lack a credit history to confirm it, such as students, entrepreneurs with promising businesses or foreign residents. A more inclusive credit scoring system could incorporate factors such as current income, employment opportunities or earning potential.

Social scoring as an alternative

It is no wonder that in less developed countries, where more people have poor or non-existent credit scores, alternatives have emerged. One such alternative is to use applicants' social networks to determine their creditworthiness. For the unbanked, or for people with poor or non-existent credit histories, it may be the only way to secure a loan. That does not make social scoring a fair practice, however, and unfortunately, once again, those most affected by such systems are the poor and minority groups.

According to a study on credit scoring with social network data, such systems determine whether an applicant is eligible for a loan based on their social connections. The approach rests on the belief that birds of a feather flock together. Most would agree that such folk wisdom should not be taken at face value when judging people's creditworthiness. Unfortunately, that is exactly what happens in social scoring, where applicants are judged by their connections on social media platforms. As with the credit scoring methods mentioned earlier, one shortcoming is that poor people are penalised because they are more likely to be connected with people considered less creditworthy. Once again, people are judged by stereotypes about their peers. While such a practice would not be legal in the EU, it is tolerated in other jurisdictions.

Another interesting finding of the study is that the size of one's social network also plays an important part: people with more extensive social networks are considered more creditworthy, while people with smaller networks are viewed as less likely to repay their loans.

It is not hard to imagine how such credit scoring would affect in-person relationships, with connections being made merely to boost one's creditworthiness.

An attentive reader will notice the parallels between the primitive loan decision-making of the early days and social network credit scoring. Both rely heavily on personal connections and social status, and both can be seen as highly subjective and burdensome for people from minority groups.

The problem with fairness metrics 

Given everything covered so far, we can spot the flaws in all three credit scoring systems. To start with, all are heavily reliant on social status, leading to discrimination against minority groups. One approach that could solve some of the issues in credit scoring is to use different fairness metrics. Ever since the loan-granting business took off, individual fairness has served as the template for decision-makers: the belief that similar people should get similar outcomes. If credit scoring moved towards group fairness, which holds that different groups should be treated similarly, the banking industry would be closer to fixing its outdated systems.

However, group fairness has its shortcomings. Firstly, there are three common group fairness metrics, and each comes with distinct attributes. The first is Demographic Parity, which holds that everyone should have an equal opportunity to obtain a favourable outcome regardless of their group membership. Demographic Parity is often paired with the four-fifths rule: the selection rate for the minority group should be at least four-fifths, or 80 percent, of the rate for the majority group. In the loan application process, this means that if half of majority-group applicants are approved, at least 40 percent of minority-group applicants should be approved too. However, Demographic Parity has shortfalls, one very important one being that the metric can be satisfied regardless of qualifications, which could mean not selecting the best-qualified applicants. In credit scoring, this could mean granting loans to people who cannot pay them back, which could contribute to another global financial crisis. The 2008 financial crisis was partly caused by lax lending standards that allowed millions of people to borrow beyond their financial capabilities and subsequently fall behind on repayments.
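The four-fifths rule reduces to a simple comparison of approval rates. A minimal sketch in Python, using invented approval counts purely for illustration:

```python
# Illustrative check of the four-fifths (80 percent) rule.
# All approval counts below are made-up numbers, not real lending data.

def selection_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants who were approved."""
    return approved / applicants

def passes_four_fifths(minority_rate: float, majority_rate: float) -> bool:
    """Demographic Parity via the four-fifths rule: the minority
    selection rate must be at least 80% of the majority rate."""
    return minority_rate >= 0.8 * majority_rate

majority_rate = selection_rate(approved=500, applicants=1000)  # 0.50
minority_rate = selection_rate(approved=300, applicants=1000)  # 0.30

# 0.30 < 0.8 * 0.50 = 0.40, so this approval pattern fails the rule.
print(passes_four_fifths(minority_rate, majority_rate))  # False
```

Note that the check says nothing about whether the approved applicants can repay, which is exactly the shortcoming described above.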

Another group fairness metric, Equalised Odds, requires that predicted outcomes be independent of protected attributes given the true outcome: applicants from minority and majority groups who actually qualify for loans should be equally likely to get them, and those who do not qualify should be equally likely to be turned down. In practice, this means granting loans to an equal proportion of qualified applicants from both groups. Of course, Equalised Odds does not solve the problem of structural biases, which leave minority groups with fewer qualified applicants in the first place. The third fairness metric, Predictive Rate Parity, requires that the score mean the same thing for every group: among applicants approved for a loan, the proportion who actually repay it should be equal across groups. However, like Equalised Odds, this approach cannot close the representational gap between social groups that results from structural biases, where some groups have restricted access to resources such as higher education.
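Both metrics can be measured from labelled predictions: Equalised Odds compares true and false positive rates across groups, while Predictive Rate Parity compares precision. A small sketch, with invented repayment data and a hypothetical `rates` helper (not from any fairness library):

```python
# Sketch: measuring Equalised Odds and Predictive Rate Parity gaps.
# y_true: 1 = repaid, 0 = defaulted; y_pred: 1 = loan approved.
# The two groups' data below are invented for illustration.

def rates(y_true, y_pred):
    """Return (true positive rate, false positive rate, precision)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return tpr, fpr, ppv

group_a = ([1, 1, 0, 0, 1], [1, 1, 0, 1, 1])
group_b = ([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])

tpr_a, fpr_a, ppv_a = rates(*group_a)
tpr_b, fpr_b, ppv_b = rates(*group_b)

# Equalised Odds asks for equal TPR and FPR across groups;
# Predictive Rate Parity asks for equal precision (PPV).
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}, FPR gap: {abs(fpr_a - fpr_b):.2f}")
print(f"PPV gap: {abs(ppv_a - ppv_b):.2f}")
```

In this toy example, every gap is non-zero, so neither metric is satisfied; a real audit would compute the same quantities over thousands of loan records.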

Identifying a suitable fairness metric might seem like a solution to inequality. However, different fairness metrics suggest different approaches to the same problem, and in general they cannot all be satisfied at once. None of them has yet found a way to distribute equal opportunities to everyone, primarily because of the long-ingrained structural biases that unfortunately still define our societies. Choosing the right fairness metric is something of a philosophical thought experiment that challenges data scientists, ethicists and policymakers.

To finish, I want to look at the full scope of unfair practices and their effect on people's lives. A poor credit score often bleeds into other parts of life, creating a loop that poor people can find impossible to escape. In 2015, the secrets of U.S. car insurance pricing were disclosed: an investigation published by Consumer Reports found that drivers with drunk-driving convictions could be given cheaper insurance premiums than drivers with poor credit scores. Insurers' calculations rest on the assumption that drivers with poor credit are more expensive to insure, as they are more likely to file an insurance claim or have one filed against them. This is one of many examples of our society's division into castes of "losers" and "winners". Unfortunately, removing the "loser" tag is a difficult task. We are standing at a crossroads concerning AI and its design, where we can still decide whether to make this task harder or easier for people.

Idiro AI ethics centre

Please get in touch with us if you have an interesting take on the subject or would like to share your thoughts with us. If you work in AI or ethics (or both), we’d love to speak with you to hear your opinion.

And if you are interested in reading more about AI ethics, please subscribe to our mailing list.
