MillenniumPost

Yes, AI robots too can have biases like racism & sexism

Boston: Artificially intelligent (AI) machines can easily learn racism and sexism from each other, say scientists who found that showing prejudice towards others does not require a high level of cognitive ability.

Scientists from Massachusetts Institute of Technology (MIT) in the US and Cardiff University in the UK showed that groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.

Prejudice may seem to be a human-specific phenomenon, one that requires human cognition to form an opinion of, or to stereotype, a certain person or group.

Some types of computer algorithms have already exhibited prejudice, such as racism and sexism, learned from public records and other data generated by humans. This new work, however, demonstrates the possibility of AI evolving prejudicial groups on its own.

The findings, published in the journal Scientific Reports, are based on computer simulations of how similarly prejudiced individuals, or virtual agents, can form a group and interact with each other.

In a game of give and take, each individual decides whether to donate to somebody inside their own group or in a different group, basing that choice on the recipient's reputation as well as on their own donating strategy, which includes their level of prejudice towards outsiders.

As the game unfolds and a supercomputer racks up thousands of simulations, each individual begins to learn new strategies by copying others either within their own group or the entire population.

"By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it," said Roger Whitaker from Cardiff University.

"Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivised in virtual populations, to the detriment of wider connectivity with others," Whitaker said.

"Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse," he said.

The findings involve individuals updating their prejudice levels by preferentially copying those who gain a higher short-term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.
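The mechanism described above, where agents choose whether to donate across group lines and then copy the strategies of higher earners, can be illustrated with a toy simulation. This is a minimal sketch, not the study's actual model: the reputation dynamics are omitted, and every parameter here (prejudice modelled as a simple probability of refusing out-group donations, the payoff values, group sizes) is a hypothetical simplification for illustration only.

```python
import random

random.seed(0)

N_GROUPS = 4
AGENTS_PER_GROUP = 10
ROUNDS = 200

# Each agent carries a prejudice level in [0, 1]: the probability of
# refusing to donate to an out-group member. (Hypothetical parameterisation,
# not the published model.)
agents = [{"group": g, "prejudice": random.random(), "payoff": 0.0}
          for g in range(N_GROUPS) for _ in range(AGENTS_PER_GROUP)]

def play_round(agents, benefit=2.0, cost=1.0):
    """Each agent meets one random other agent and decides whether to donate."""
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        same_group = donor["group"] == recipient["group"]
        # Always donate in-group; donate out-group with prob (1 - prejudice).
        if same_group or random.random() > donor["prejudice"]:
            donor["payoff"] -= cost
            recipient["payoff"] += benefit

def imitate(agents):
    """Copy the prejudice level of a random agent who earned more this round.

    This is the key point of the study: the update rule is a simple
    payoff comparison, requiring no advanced cognition.
    """
    for agent in agents:
        model = random.choice(agents)
        if model["payoff"] > agent["payoff"]:
            agent["prejudice"] = model["prejudice"]

for _ in range(ROUNDS):
    for a in agents:
        a["payoff"] = 0.0
    play_round(agents)
    imitate(agents)

mean_prejudice = sum(a["prejudice"] for a in agents) / len(agents)
print(f"mean prejudice after {ROUNDS} rounds: {mean_prejudice:.2f}")
```

Running many such simulations with different seeds and payoff settings is what lets the researchers observe which conditions push the population-wide prejudice level up or down.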

"It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population," Whitaker said. "Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behaviour of devices is also influenced by others around them," he said.

"Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource," he added.
