
Why can artificial intelligence be racist and sexist?




November 8, 2018, 6:35 PM | Last updated November 8, 2018, 6:36 PM

The Peruvian researcher Omar Flórez is preparing for a "very real" future in which the streets will be full of surveillance cameras able to recognize our faces and gather information about us as we walk through the city.

He explains that they will do so without our consent, since these are public places and most of us do not usually cover our faces when we leave home.

Our face will become our password: when we enter a store, the shop will recognize us and look up information about us, such as whether we are new or returning customers, or where we have been before walking through its door. The treatment the company gives us will depend on all the information it gathers.

Flórez wants to prevent characteristics such as our sex or skin color from being among the criteria these companies evaluate when deciding whether we deserve a discount or special attention, something that can happen without the companies themselves even noticing.

Artificial intelligence is not perfect: even when it has not been programmed to do so, software can learn on its own to discriminate.

Flórez is working on an algorithm that allows facial identification while hiding sensitive data such as race and sex. COURTESY OF OMAR FLÓREZ

This engineer, born in Arequipa 34 years ago, received his doctorate in Computer Science from the University of Utah (United States) and currently works as a researcher at Capital One's headquarters.

He is one of the few Latin Americans studying the ethical aspects of machine learning, a process he defines as "the ability to predict the future with data from the past using computers".

This technology underlies algorithms used to develop driverless cars and to detect diseases such as skin cancer, among other applications.

Flórez is working on an algorithm that lets computers identify faces without being able to infer a person's gender or ethnic origin. His dream is that, when that future arrives, companies will include his algorithm in their computer systems so as to avoid making racist or sexist decisions without even knowing it.

We always say we cannot be objective precisely because we are human. We have tried to entrust objectivity to machines, but it seems they cannot manage it either. Why?

Because they are programmed by humans. In fact, we have recently realized that the algorithm itself is an opinion. I can solve a problem with algorithms in different ways, and each of them, in some way, incorporates my own vision of the world. In fact, choosing the right way to judge an algorithm is already an observation, an opinion about the algorithm itself.

Let's say I am predicting the likelihood that someone will commit a crime. I can collect pictures of people who have committed crimes, where they live, their race, their age and so on. Then I use that information to maximize the accuracy of the algorithm and predict who may commit a crime later, or even where the next offense may occur. That prediction can lead police to concentrate on areas where suddenly more of the residents are black, because there are more crimes in that area, or to start stopping more Latinos because it is very likely that their papers are not in order.

So for someone who lives in that area, whether legally resident or not, and who commits no crimes, it will be twice as hard to shake off the stigma the algorithm assigns. For the algorithm you are part of a family or a classification, and it is statistically much harder for you to leave that family or classification. In a way, you are negatively affected by the reality that surrounds you. Basically, up to now, we have been encoding our stereotypes as human beings.

The Peruvian researcher dreams that, in the future, companies using facial recognition programs will adopt his algorithm. COURTESY OF OMAR FLÓREZ

Is that subjective element the criteria you choose when programming the algorithm?

Exactly. There is a chain of processes involved in building a machine learning algorithm: collecting the data, selecting which features are important, choosing the algorithm itself, then testing it to see how it performs and reduce errors, and finally releasing it for the public to use. We have realized that biases creep into every one of those processes.
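To make that chain concrete, the following is a minimal Python sketch of the steps he lists, with comments marking where bias can enter. The file name, column names and model choice are illustrative assumptions, not Flórez's actual setup.

```python
# Minimal sketch of the machine learning pipeline described above.
# The CSV file, column names and model choice are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collect data: historical records already reflect past human decisions.
df = pd.read_csv("historical_decisions.csv")

# 2. Select features: choosing what to keep (or drop) is itself an opinion.
features = ["age", "zip_code", "prior_incidents"]
X, y = df[features], df["outcome"]

# 3. Choose the algorithm: every model family encodes its own assumptions.
model = LogisticRegression(max_iter=1000)

# 4. Test and reduce error: one aggregate metric can hide per-group gaps.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 5. Release it to the public: every choice above is now applied at scale.
```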

A 2016 investigation by ProPublica found that the judicial systems of several US states use software to determine which defendants are more likely to reoffend. ProPublica discovered that the algorithms favored white people and penalized black people, even though the form used to collect the data included no questions about skin tone. In a way, the machine inferred it and used it as an evaluation criterion even though that was never planned, right?

What happens is that the data already encodes race, and you do not even realize it. For example, in the United States we have the zip code. There are areas where mostly African-American people live; in southern California, for example, mostly Latino people live. So if you use the zip code as a feature to feed a machine learning algorithm, you are also encoding the ethnic group without realizing it.
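A toy example of that proxy effect, using a small invented table, could look like this: knowing only the zip code is almost enough to recover the group, even when the group column is never given to the model.

```python
# Hypothetical data illustrating a zip code acting as a proxy for ethnicity.
import pandas as pd

df = pd.DataFrame({
    "zip_code": ["90001", "90001", "90001", "10027", "10027", "10027"],
    "group":    ["latino", "latino", "latino",
                 "african_american", "african_american", "african_american"],
})

# Share of the most common group within each zip code: values close to 1.0
# mean the zip code reveals the group even when "group" is excluded as a feature.
proxy_strength = (
    df.groupby("zip_code")["group"]
      .agg(lambda g: g.value_counts(normalize=True).max())
)
print(proxy_strength)
```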

Is there any way to avoid this?

Probably. At the end of the day the responsibility falls on the human being who programs the algorithm and on how ethical he or she can be. That is, if I know my algorithm's error will rise by 10% when I stop using a feature that may be sensitive for characterizing a person, then I simply take it out and accept responsibility for the consequences, perhaps economic, that this may have for my company. So there is certainly an ethical barrier in deciding what goes into the algorithm and what does not, and it often falls to the programmer.

Algorithms are supposed to simply process large volumes of information and save us time. Is there no way to make them infallible?

Infallible, no, because they always represent an approximation of reality; that is, there will always be some margin of error. However, there is currently very interesting research in which you penalize the presence of sensitive data. The human basically chooses which data may be sensitive, and the algorithm either avoids using it or uses it in a way that does not show correlation. Honestly, though, for the computer everything is just numbers: whether a value is 0, 1 or somewhere in between, it has no common sense. Although there is a lot of interesting work that lets us try to avoid prejudice, there is an ethical part that always falls to the human.
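One way to read "penalizing the presence of sensitive data" is as an extra term in the training loss. The sketch below, on synthetic data, trains a logistic regression while penalizing the covariance between the model's scores and a sensitive attribute, in the spirit of fairness-constrained classifiers; it illustrates the general idea and is not Flórez's own algorithm.

```python
# Synthetic-data sketch: logistic regression with a penalty on the covariance
# between model scores and a sensitive attribute s, so the scores end up
# carrying little information about s.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 4
X = rng.normal(size=(n, d))
s = (rng.random(n) > 0.5).astype(float)               # sensitive attribute (0/1)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(d)
lam, lr = 5.0, 0.1                                    # fairness weight, step size

for _ in range(2000):
    scores = X @ w
    p = 1.0 / (1.0 + np.exp(-scores))
    grad_ce = X.T @ (p - y) / n                       # cross-entropy gradient
    cov = np.mean((s - s.mean()) * scores)            # covariance(score, s)
    grad_fair = 2 * cov * (X.T @ (s - s.mean())) / n  # gradient of cov**2
    w -= lr * (grad_ce + lam * grad_fair)

print("covariance(score, s) after training:", np.mean((s - s.mean()) * (X @ w)))
```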

Is there any area that you, as an expert, believe should not be left in the hands of artificial intelligence?

I think that, at the moment, we should be prepared to use the computer to assist rather than to automate. The computer should be able to tell you, for instance, which cases you should process first in a judicial system. But it should also be able to tell you why. This is called interpretability, or transparency, and machines should be able to explain the logic that led them to a given decision.
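For linear models, that kind of explanation can be as simple as reporting how much each feature pushed one particular decision. The following sketch, on synthetic data with invented feature names, prints per-feature contributions to the log-odds of a single case; it illustrates the idea of interpretability rather than any specific judicial system.

```python
# Synthetic sketch of a per-case explanation for a linear model:
# each feature's contribution to the decision is coefficient * feature value.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["prior_incidents", "age", "months_since_last_event"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

case = X[0]
contributions = model.coef_[0] * case   # effect of each feature on the log-odds
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda item: -abs(item[1])):
    print(f"{name:>25}: {value:+.3f}")
```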

Computers decide based on patterns, but aren't stereotypes patterns too? Aren't they useful for the system when it looks for patterns?

If you want to minimize error, for example, it is numerically convenient to use those prejudices because they give you a more accurate algorithm. But the developer has to realize that there is an ethical component to doing this. There are currently regulations that prohibit using certain features for things such as credit analysis or even the use of security videos, but they are very scarce. Perhaps that is what we need: to acknowledge that reality is unfair and full of prejudice.

The interesting thing is that, despite this, some algorithms allow us to try to reduce that level of prejudice. That is, I can use skin tone, but without giving it more weight for one group, giving it the same relevance for all ethnic groups. So, answering your question: yes, one might think that using such features will give more accurate results, and many times it does. There, again, is the ethical element: do I want to sacrifice a certain level of accuracy in order not to give the user a bad experience, or not to use any kind of prejudice?
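Checking whether a model gives "the same relevance" to all groups usually starts with disaggregated evaluation: measuring performance separately per group instead of relying on one overall number. A minimal sketch, with made-up labels and predictions:

```python
# Per-group accuracy check on made-up predictions: a model that looks fine
# overall may still perform noticeably worse for one group.
import numpy as np

def per_group_accuracy(y_true, y_pred, group):
    return {g: float(np.mean(y_pred[group == g] == y_true[group == g]))
            for g in np.unique(group)}

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(per_group_accuracy(y_true, y_pred, group))   # {'a': 0.75, 'b': 0.5}
```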

The technology behind driverless cars uses machine learning. GETTY IMAGES

Experts at Amazon realized that a computer tool they had designed to select staff discriminated against résumés that included the word "woman" and favored terms more commonly used by men. It is quite surprising, because to avoid that prejudice you would have to guess which terms men use more often than women in their résumés.

Even for a human being that is hard to notice.

But at the same time, we try not to draw gender distinctions, saying that words or clothes are not masculine or feminine and that anyone can use them. It seems that machine learning goes in the opposite direction, since you have to acknowledge the differences between men and women and study them.

Algorithms capture what actually happens, and the reality is that, yes, men use certain words that women may not. And the truth is that people sometimes respond better to those words because those doing the evaluating are also men. So saying otherwise may go against the data. This problem is avoided by collecting the same number of résumés from men and women; then the algorithm gives the same weight to both, and to the words each sex uses. If you only have 100 résumés on the table, there may be only two from women and 98 from men, and then you will create prejudice because you are only modeling what happens with men for that job.
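One straightforward version of that rebalancing is to weight each résumé inversely to the size of its group, so that 2 résumés from women carry as much total weight as 98 from men. The sketch below uses synthetic features and labels purely for illustration.

```python
# Hypothetical résumé data: reweight so both genders contribute equally in total.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))                 # résumé features (synthetic)
gender = np.array(["m"] * 98 + ["f"] * 2)     # 98 men, 2 women
y = rng.integers(0, 2, size=100)              # hired / not hired (synthetic)

# Each example is weighted by 1 / (size of its group).
counts = {g: int(np.sum(gender == g)) for g in np.unique(gender)}
sample_weight = np.array([1.0 / counts[g] for g in gender])

model = LogisticRegression().fit(X, y, sample_weight=sample_weight)
print("total weight per group:",
      {g: float(sample_weight[gender == g].sum()) for g in counts})
```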

So those who worry about being politically correct will be less accurate, because you have to engage with those differences and how frequent they are?

You have touched on a great point: empathy. The stereotype people have of engineers is that they are very analytical and maybe even a little antisocial. What is happening is that we are starting to need things from engineers that we did not consider relevant, or that we thought were fine as they were: empathy, ethics and so on. We need to develop these qualities because we make so many decisions during the process of implementing an algorithm, and there is often an ethical component. If you are not even aware of that, you will not notice it.

Flórez says that, in the near future, our face will be our password. GETTY IMAGES

Do you notice differences between an algorithm designed by one person and one designed by 20?

In theory, the prejudices in an algorithm built by more people should be reduced. The problem is that, many times, that group is made up of people who are very similar to one another. Perhaps they are all men, or all Asian. Perhaps it would be good to have women who notice things that the rest of the group generally does not. That is why diversity is so important right now.

Can we say that an algorithm reflects its author's prejudices?

Yes.

And that algorithms are prejudiced precisely because of the lack of variety among those who make them?

Not only because of that, though it is an important part. I would say it is also partly due to the data itself, which reflects reality. Over the last 50 years we have strived to create algorithms that reflect reality. Now we have realized that, many times, reflecting reality also reinforces people's stereotypes.

Do you think there is enough awareness in the industry that algorithms can cause harm, or is it something that is not given much importance?

At a practical level, it is not given the importance it deserves. At the research level, many companies are beginning to investigate this issue seriously by creating groups under the name FAT: Fairness, Accountability and Transparency.
