The Biggest Problem in Artificial Intelligence Applications: Discrimination

Machine learning algorithms built on high-dimensional data, together with various analyses and artificial intelligence applications, offer a learning capacity beyond the human mind. However, because these algorithms are trained on data selected and compiled by humans, that training brings serious problems along with it.

Although prejudice is a part of our lives, its reflection in artificial intelligence algorithms may cause serious problems in the future. Biases of many kinds, from gender to race, are among the biggest problems we have already encountered in previously developed artificial intelligence applications.

Jason Bloomberg, author of The Agile Architecture Revolution, says that the problem of bias in artificial intelligence applications is a great threat to the future. Speaking about the training data of artificial intelligence, Bloomberg underlines how serious the problem is, noting that data sets about humans, compared with data about the physical world, are far more susceptible to bias.

Microsoft’s racist artificial intelligence Tay.AI:

Tay.AI, a Twitter chatbot developed by Microsoft in 2016, interacted with people through its Twitter account 'TayandYou' and shocked the whole world with its answers. The project, which was shut down just 16 hours after launch because of its racist, sexist and abusive replies, exposed the racism and sexism problems already present in artificial intelligence at that time.

Apple’s sexist emoji suggestion system:

Last year, when iPhone users typed the word 'CEO' on the keyboard, the phone suggested only the 'businessman' emoji, showing that artificial intelligence could adopt a sexist attitude. Apple fixed this with a later update and now offers users a gender option in such emoji suggestions.

Google Translate’s sexist translation system:

In its earlier versions, Google Translate rendered gender-neutral Turkish sentences built with the pronoun 'o' as 'he is a doctor' but 'she is a nurse', once again showing that artificial intelligence can be sexist. With a new feature coming to Google Translate, users translating from Turkish into languages that distinguish masculine and feminine forms will be offered both translation options.

Artificial intelligence algorithms pick up the higher number of men in certain occupations within the data sets they are given and produce results that reflect this imbalance. This shows how important the data set used to train an artificial intelligence really is.
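
To see how this happens, here is a minimal sketch: a toy 'model' that simply predicts the pronoun it has seen most often next to each occupation. The data set, occupations and counts below are invented purely for illustration, but the mechanism is the same one that skews real systems: if the training data is imbalanced, the prediction reflects the imbalance.

```python
from collections import Counter, defaultdict

# Hypothetical toy data set of (occupation, pronoun) pairs with a skewed
# distribution, invented here only to illustrate the mechanism.
training_data = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

# "Training": count how often each pronoun co-occurs with each occupation.
counts = defaultdict(Counter)
for occupation, pronoun in training_data:
    counts[occupation][pronoun] += 1

def predict_pronoun(occupation):
    """Return the pronoun seen most often with this occupation."""
    return counts[occupation].most_common(1)[0][0]

print(predict_pronoun("doctor"))  # -> "he"  (3 of the 4 examples were "he")
print(predict_pronoun("nurse"))   # -> "she" (3 of the 4 examples were "she")
```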

At this point, what needs to be done is for programmers to detect the prejudices that may arise in artificial intelligence and take precautions against them. In addition, the data sets used in training should be carefully selected so that they do not introduce prejudice in the first place; one simple check of this kind is sketched below.
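
One simple precaution is to audit the balance of the training set per group before training. The sketch below assumes invented records, field names and an arbitrary 40% threshold; it is an illustration of the idea, not a standard procedure.

```python
from collections import Counter, defaultdict

def audit_gender_balance(records, threshold=0.4):
    """Flag occupations whose minority gender falls below the threshold share.

    `records` is an iterable of (occupation, gender) pairs; the 40% threshold
    is an arbitrary choice for this sketch.
    """
    by_occupation = defaultdict(Counter)
    for occupation, gender in records:
        by_occupation[occupation][gender] += 1

    flagged = {}
    for occupation, genders in by_occupation.items():
        total = sum(genders.values())
        minority_share = min(genders.values()) / total if len(genders) > 1 else 0.0
        if minority_share < threshold:
            flagged[occupation] = dict(genders)
    return flagged

# Invented example data: heavily skewed for "engineer", balanced for "teacher".
sample = [("engineer", "M")] * 9 + [("engineer", "F")] + \
         [("teacher", "M")] * 5 + [("teacher", "F")] * 5
print(audit_gender_balance(sample))
# -> {'engineer': {'M': 9, 'F': 1}}  — this group needs rebalancing or more data
```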

Beyond such measures, training artificial intelligence specifically against bias, as in the Google Translate example, is another option. Although many programmers around the world are working on these problems, no fully effective method has been put into practice yet. In the coming period, we will see together whether a solution can be found to racism and sexism in artificial intelligence, which remain major problems in our own societies as well.
