
An AI solution to AI bias

For ages, we have faced issues of bias and discrimination by humans. Now, we have to worry about AI being biased as well


Let's take a trip down memory lane, back to the good old school days. Most of us remember attending value education classes in school; it is there that we were formally introduced to notions of diversity and inclusion. For many, the first exposure to Constitutional principles came at school, be it the dedicated page on the Preamble in most NCERT books or the chapters in senior years discussing Fundamental Rights, Fundamental Duties, and the Directive Principles of State Policy. We were taught about Constitutional ideals that must be upheld and respected. Now fast forward to the present. We are in the age of the Fourth Industrial Revolution, face to face with Artificial Intelligence (AI). Many feel that AI has jumped straight out of science fiction. But amid the euphoria, there is something to ponder: are the values and Constitutional ideals that we hold so close to our hearts in danger from AI?

Take, for instance, the oft-quoted 2016 study by ProPublica, which found that a computer program (Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS) was twice as likely to falsely flag black defendants as being at high risk of committing future crimes compared with their white counterparts. What makes this even more dangerous is that courts in the US rely on it for risk assessment. To drive the point home and highlight problems with AI, the University of Melbourne and Science Gallery Melbourne developed the Biometric Mirror, an AI that uses a person's photograph to rate their physical and personality traits. It does this by comparing the photograph against a database of 10,000 people's opinions of different facial appearances, and then lists 14 characteristics of the person, such as age, race, and even perceived attractiveness. Because the database consists of subjective opinions, the researchers used it to demonstrate how AI can discriminate unethically.

For a long time, we have been concerned with bias and discrimination by humans. Now we have to worry about AI being biased too. As scholars have pointed out, it is the biased and unrepresentative data used for training AI that leads to biased results, and this is raising ethical concerns all over the world. These concerns are equally valid for a country like India. With the Government and several of its agencies moving to use AI for governance, many are strongly arguing for safeguards. This is important for at least three reasons. First, India is proud of being home to a diverse population of different religions, castes, and cultures, all living together, even as it tackles illiteracy, poverty, and social disparity. There are also vulnerable sections: minority communities, people belonging to certain castes, women, the third gender, children, and the differently abled. Experts in India are increasingly flagging the issue of bias in AI and the danger it poses to our diverse population. Second, under the Indian Constitution, the State must respect Fundamental Rights, be it the Right to Equality (Articles 14-18), the Right to Freedom of Religion (Articles 25-28), or Cultural and Educational Rights (Articles 29-30). If the Government, or an agency falling under the definition of 'State', uses AI for its functions, it has to ensure that these Constitutional provisions are not violated by a biased system. Third, since AI is still in its early stages, there is no clarity on the laws that will regulate it or on the liability mechanism. So what do we do?

We need not look too far to find a solution. Recognising this issue, the recent Centre for Internet & Society Report titled 'Artificial Intelligence in the Governance Sector in India' provides useful recommendations. It says: "Given that the governance sector has potential implications for the fundamental rights of all citizens, it is also imperative that the government does not shy away from its obligation to ensure the fair and ethical deployment of this technology while also ensuring the existence of robust redress mechanisms. To do so, it must chart out a standard rules-based system that creates guidelines and standards for private sector development of AI solutions for the public sector." More specifically, it has recommended that "Primary accountability for any use of AI sanctioned by the State should lie with the government itself. There must be a cohesive and uniform framework that regulates the partnerships, which the government enters into with the private sector." Further, it points out that getting private program developers to follow Constitutional standards for AI used by the Government is going to be a key challenge. To ensure representative data, the Report suggests developing standards for data curation so that data reflects India's socio-economic reality.

Let's look at what private sector developers are doing about the issue of bias in general. To begin with, they are aware of the issue, and beyond recognising the problem, they are working on ways to check it. Facebook has developed a tool called Fairness Flow, which automatically warns if an algorithm is making an unfair judgement based on a person's race, gender, or age. Microsoft is another company working on a tool to automatically detect bias in AI algorithms. Even though the team behind it concedes that it cannot catch every instance, it is still a step in the right direction.
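The core idea behind such automated checks can be sketched in a few lines: given a model's predictions and a protected attribute, compare error rates across groups and raise a warning when the gap crosses a threshold. The sketch below is purely illustrative; the data, group labels, and threshold are made-up assumptions and do not describe how Fairness Flow or Microsoft's tool actually work.

```python
# Illustrative bias check: compare false-positive rates across groups
# and flag a disparity. All data and the threshold are invented for
# demonstration; real tools are far more sophisticated.

def false_positive_rate(predictions, labels):
    """Share of truly negative cases (label 0) that were flagged positive."""
    negatives = [p for p, y in zip(predictions, labels) if y == 0]
    if not negatives:
        return 0.0
    return sum(negatives) / len(negatives)

def disparity_warning(groups, predictions, labels, threshold=0.2):
    """Return per-group false-positive rates and whether the largest
    gap between any two groups exceeds the threshold."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_positive_rate(
            [predictions[i] for i in idx], [labels[i] for i in idx]
        )
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > threshold

# Hypothetical risk predictions (1 = "high risk") for two groups.
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels      = [0,   0,   0,   1,   0,   0,   0,   1]   # actual outcomes
predictions = [1,   1,   0,   1,   0,   0,   0,   1]   # model output

rates, biased = disparity_warning(groups, predictions, labels)
print(rates, biased)  # group A's false-positive rate (2/3) dwarfs group B's (0.0)
```

In this toy data, the model wrongly labels two of group A's three truly low-risk people as high risk while making no such error for group B, which is exactly the kind of asymmetry the ProPublica study reported about COMPAS.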

At a collective level, the 'Partnership on AI' was founded in 2016 by Apple, Amazon, DeepMind/Google, IBM, Microsoft, and Facebook. Its website gives further details on its mission: "In support of our mission to benefit people and society, the Partnership on AI intends to conduct research, organize discussions, share insights, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies including machine perception, learning, and automated reasoning." It lists four goals: 'develop and share best practices', 'advance public understanding', 'provide an open and inclusive platform for discussion & engagement', and 'identify and foster aspirational efforts in AI for socially beneficial purposes'. The Partnership has identified six thematic pillars to work on. These include the application of AI in safety-critical areas such as healthcare; working towards fair, transparent, and accountable AI; discussing the impact of AI on jobs and the economy; collaboration between people and AI; studying the social impact of AI, such as its influence on privacy and criminal justice; and, finally, using AI for social good. In a short span of time, the partnership has expanded to include other companies, academics, and NGOs. A recent entrant to the group is UNDP.

These are welcome steps. As awareness of these issues grows among private sector developers, one hopes that more will come forward to develop fair AI tools. Simultaneously, the Government, for its part, can ensure safeguards, especially when it relies on privately developed AI for governance. Such combined efforts can ensure that AI helps eliminate human bias instead of adding any more of its own.

We have fought hard to protect principles of equality and fairness in the country. Now it's time to ensure that these celebrated principles of the human world are also translated into the AI world.

(The author is a lawyer and currently a Young Professional with Economic Advisory Council to the Prime Minister and NITI Aayog. The views expressed are strictly personal)

Aparajita Gupta
