Machine behaviour may be easier to alter than our own

In a paper titled ‘Fairness Through Awareness’ published in 2011, researchers at IBM, Microsoft and the University of Toronto underlined the importance of preventing algorithmic discrimination. Back then, algorithmic bias was an esoteric concern and deep learning had not yet become mainstream. But in recent years, incidents ranging from racist Twitter bots and unfortunate Google search results to software systems found to be biased against people of colour when assessing risks of recidivism have highlighted the importance of mitigating biases in artificial intelligence (AI) algorithms. As algorithmic decision-making becomes an integral part of our social systems, algorithmic bias will soon become a significant policy issue.

AI algorithms lack the human mind’s power to distinguish right from wrong. The data we feed into these algorithms ultimately determines the course an algorithm takes. Present attempts to rid AI algorithms of bias range from simple strategies, such as masking the features that lead to biased outcomes and diversifying or re-sampling training data sets, to more complex ones, such as using separate algorithms to classify different groups of people instead of applying the same measures to everyone. Organizational strategies, such as hiring more diverse teams to work on AI projects and making algorithms more transparent and interpretable, have also been deployed. But these strategies have not yet achieved a satisfactory level of algorithmic neutrality.
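As an illustration, here is a minimal Python sketch of the first two strategies mentioned above: masking sensitive features and re-sampling the training data. The DataFrame and column names are hypothetical, and masking alone is known to fall short in practice, since other features can act as proxies for the protected attribute.

```python
# A minimal sketch of two common mitigation strategies (hypothetical data):
# (1) masking a sensitive feature and (2) re-sampling the training data so
# that every group is equally represented.
import pandas as pd

def mask_sensitive_features(df: pd.DataFrame, sensitive_cols: list) -> pd.DataFrame:
    """Drop columns that directly encode protected attributes.

    Note: proxy variables correlated with the dropped columns may remain,
    which is one reason masking alone rarely removes bias.
    """
    return df.drop(columns=sensitive_cols)

def resample_balanced(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Up-sample every group to the size of the largest one."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts).reset_index(drop=True)

# Hypothetical usage:
# train = resample_balanced(train, group_col="sensitive_group")
# train = mask_sensitive_features(train, ["sensitive_group"])
```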

Various attempts have long been made to tackle bias across society. Jennifer Eberhardt’s book, Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, reminds us that racial-zoning regulations in several US cities that forbade African-Americans from moving into Caucasian neighbourhoods were outlawed by the US Supreme Court back in 1917. Yet, even today, residues of those discriminatory practices linger on.

Airbnb, a space-rental platform that is among the stars of the online economy, aims to foster the ideal that “every community is a place where you can belong”. But the company has found racial discrimination to be its greatest challenge in realizing this mission. Guests who belong to minority groups sometimes feel discriminated against by hosts who decline their booking requests. To address this, Airbnb made it mandatory for all its users to sign a commitment upholding racial equality as a core value. That did not have the desired effect. Airbnb then introduced ‘instant booking’, which lets a guest book an accommodation without the host’s prior approval. Only a tiny fraction of travellers, about 3%, opted for this provision, and African-Americans were found to be even more reluctant than other groups to use it, because they wanted to avoid unpleasant surprises when meeting the host in person. Why did these persistent attempts to overcome bias fail?

As leading psychologists Mahzarin R. Banaji and Anthony G. Greenwald point out in their book Blindspot: Hidden Biases of Good People, the human brain houses biases hidden in its non-conscious processes. These implicit biases shape one’s everyday decisions without one’s conscious awareness. Expecting an individual to become conscious of these biases and change them might not be realistic; several generations of human evolution may have to work on behavioural improvement before real change is visible.

It is clear that many of the algorithmic biases the global AI industry is trying to tackle relate to prejudices that have long been ‘baked’ into social attitudes. If the world is biased, its historical data will be biased too, and AI algorithms that ‘learn’ from this data are bound to inherit that bias. So, even as we deploy strategies to mitigate algorithmic bias at a rational level, AI professionals should look for new ways to solve the problem at an implicit level.

The idea is not to fight a bias head on, but to counterbalance it. Suppose we are evaluating the creditworthiness of an individual. Lessons from behavioural science remind us that many behavioural factors can help gauge a person’s creditworthiness. For example, a person with a growth mindset has a higher chance of succeeding in life, and a person who displays grit under adversity is better placed to deal with setbacks. A person’s creditworthiness can often be traced back to these and other such behavioural traits, which can expose the positive side of a borrower’s ability and willingness to repay loans. As these behavioural factors are mostly implicit in nature, introducing them into credit markets can be an effective way to counter the negative effect of biases in traditional AI-training data.
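To make the idea concrete, the sketch below shows what such a combined feature vector might look like. Every feature name is hypothetical, and the behavioural scores are assumed to come from some yet-to-be-defined measurement process; as the next paragraph notes, defining and capturing them is exactly the hard part.

```python
# A hedged illustration of augmenting traditional credit features with
# behavioural ones. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Applicant:
    # Traditional features
    income: float
    repayment_history: float  # e.g. share of past instalments paid on time
    # Behavioural proxies (measuring these reliably is the open challenge)
    growth_mindset: float     # assumed score in [0, 1]
    grit: float               # assumed score in [0, 1]

def credit_features(a: Applicant) -> list:
    """Combine traditional and behavioural signals into one feature vector
    for a downstream scoring model."""
    return [a.income, a.repayment_history, a.growth_mindset, a.grit]

# Two applicants with identical traditional records but different
# behavioural profiles now yield different feature vectors.
print(credit_features(Applicant(50_000.0, 0.9, 0.8, 0.7)))
print(credit_features(Applicant(50_000.0, 0.9, 0.3, 0.2)))
```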

Much work will have to be done to implement this new strategy of combining implicit behavioural data with traditional data. The biggest challenge will be identifying the non-conscious behavioural drivers of a person’s creditworthiness. The second will be capturing data that correctly represents these implicit behavioural factors. But the difficult task would be worth the effort: once AI systems are seen to be fairer, their acceptance will grow. This strategy is also a reminder that, to make this world a better place, instead of waiting endlessly for humans to change their undesirable patterns of behaviour, it might be much easier to train machines to behave in a more responsible manner.

Biju Dominic is the chief evangelist, Fractal Analytics and chairman, FinalMile Consulting
