By: Chris Yalom | 04-14-2017 | News
Photo credit: Bertrandb | Dreamstime.com

AI Breeding Hate, Sexism, Racism in Robots

According to new research, humans can teach artificially intelligent robots and devices to be racist, sexist and otherwise prejudiced. These machines learn what language means automatically, from the way people actually use it.

The researchers discovered that the AI associates male names with career-related terms, maths and the sciences, while female names were strongly associated with artistic terms and ones relating to the family. There were also strong associations of pleasant terms with European and American names, and of unpleasant terms with African-American names.
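To illustrate the kind of association measure involved, here is a minimal sketch in Python. The toy embeddings and word lists are invented for illustration; this is not the study's data or exact method, just a common way of scoring how much closer one word sits to one set of terms than to another, using cosine similarity.

```python
# Sketch of a word-association score over word embeddings.
# The vectors below are made-up toy values, purely illustrative.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attrs_a, attrs_b):
    # Positive score: the word is closer to attribute set A;
    # negative score: closer to attribute set B.
    return (np.mean([cosine(word_vec, v) for v in attrs_a])
            - np.mean([cosine(word_vec, v) for v in attrs_b]))

emb = {
    "john":   np.array([0.9, 0.1, 0.2]),
    "amy":    np.array([0.1, 0.9, 0.2]),
    "career": np.array([0.8, 0.2, 0.1]),
    "family": np.array([0.2, 0.8, 0.1]),
}

career_terms = [emb["career"]]
family_terms = [emb["family"]]
print(association(emb["john"], career_terms, family_terms))  # > 0: leans "career"
print(association(emb["amy"], career_terms, family_terms))   # < 0: leans "family"
```

With real embeddings trained on web text, the same score reproduces the biases described above, because the vectors reflect how the words are actually used.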

The effects of such biases in AI can be profound.

One example is Google Translate, which learns what words mean from the way people use them. Translating the Turkish sentence “O bir doktor” into English gives “he is a doctor”. Turkish pronouns are not gender-specific, so the sentence could equally mean “he is a doctor” or “she is a doctor”. But change “doktor” to “hemşire”, which means nurse, and Google Translate renders it as “she is a nurse”, assuming the nurse is female.
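A hedged sketch of why a statistical translator might behave this way: with no gender marked in the Turkish source, one plausible strategy is simply to pick whichever English pronoun co-occurred most often with the profession in the training data. The counts below are made up, and this is not Google Translate's actual algorithm, just the general mechanism.

```python
# Hypothetical pronoun-profession co-occurrence counts from a training corpus.
cooccurrence = {
    "doctor": {"he": 9000, "she": 3000},
    "nurse":  {"he": 1500, "she": 8500},
}

def translate_pronoun(profession):
    # Choose the pronoun most frequently seen with this profession.
    counts = cooccurrence[profession]
    return max(counts, key=counts.get)

print(translate_pronoun("doctor"))  # "he"  -> "he is a doctor"
print(translate_pronoun("nurse"))   # "she" -> "she is a nurse"
```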

Last year Tay, a Microsoft chatbot, was given its own Twitter account and allowed to interact with the public. Within 24 hours it had turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories, writing: “Bush did 9/11 and Hitler would have done a better job than the monkey we have now.”

The lesson is that any intelligent system that learns enough about the properties of language to understand and produce it will also acquire historical cultural associations, some of which can be objectionable.

The researchers said these problematic biases should not be blamed on the AI itself. And simply changing the way AI learns is risky, because it could strip out unobjectionable meanings and associations of words along with the offensive ones.

The study also suggests that humans may develop prejudices partly because of the language we speak, with behaviour driven by the cultural history embedded in a term's historic use. Such histories can evidently differ between languages.

Professor Joanna Bryson, a researcher at Princeton University, said that rather than changing the way AI learns, we should change the way it expresses itself. The AI could still hear racism and sexism, but a moral code would prevent it from expressing those same sentiments.
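As a rough illustration of that suggestion, the sketch below separates what a system has learned from what it is allowed to say: a simple output filter (the “moral code”) screens responses before they are expressed. The pattern list and function names are hypothetical placeholders, not any real system's implementation.

```python
# Placeholder list of expressions the system has learned but must not repeat.
BLOCKED_PATTERNS = ["<slur>", "<stereotype>"]  # hypothetical entries

def express(candidate_sentence):
    # The learned model may generate biased text; the filter decides
    # whether it gets expressed.
    if any(p in candidate_sentence.lower() for p in BLOCKED_PATTERNS):
        return "[response withheld by output filter]"
    return candidate_sentence

print(express("The weather is nice today."))   # passes through unchanged
print(express("They are all <stereotype>."))   # withheld by the filter
```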

The European Union has already passed a law to ensure the terms of AI filters are made public.

For Professor Bryson, the focus of the research is more on humans than on AI. The most important thing, she thinks, is that we understand more about how we transmit information, because implicit biases affect us all.

Source:

http://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-robots-artificial-intelligence-racism-sexism-prejudice-bias-language-learn-from-humans-a7683161.html
