Artificial intelligence researchers face a problem of accountability: How do you ensure decisions are responsible when the decision-maker is not a responsible person but an algorithm? At the moment, only a handful of individuals and organizations have the power and resources to automate decision-making.

Companies rely on algorithms to approve a loan or determine a sentence. But the foundations on which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmers, and from a powerful company’s bottom line can snowball into unintended consequences. This is the reality that AI researcher Timnit Gebru warned of at a RE:WIRED discussion on Tuesday.

“There were companies purporting [to assess] someone’s likelihood of committing a crime again,” said Gebru. “That was terrifying for me.”

Gebru was a star engineer at Google specializing in AI ethics. She co-led a team charged with guarding against algorithmic racism, sexism, and other biases. Gebru also co-founded the nonprofit Black in AI, which seeks to improve the inclusion, visibility, and health of Black people in her field.

Google forced her out last year. But she has not given up her fight to prevent unintentional harm from machine learning algorithms.

On Tuesday, Gebru spoke with WIRED senior writer Tom Simonite about the incentives in AI research, the role of worker protections, and her planned independent institute for AI ethics and accountability. Her central point: AI needs to slow down.

“We haven’t had time to think about how it should even be built, because we’re always putting out fires,” she said.

As an Ethiopian refugee attending public school in a Boston suburb, Gebru was quick to pick up on America’s racial dissonance. Lectures referred to racism in the past tense, but that didn’t square with what she saw, Gebru told Simonite earlier this year. She has repeatedly found a similar misalignment in her tech career.

Gebru’s professional life began in hardware. But she changed course when she saw barriers to diversity, and began to suspect that most AI research had the potential to harm already marginalized groups.

“The confluence of that got me going in a different direction, which is to try to understand and limit the negative societal impacts of AI,” she said.

For two years, Gebru co-led Google’s Ethical AI team with computer scientist Margaret Mitchell. The team created tools to protect Google’s product teams against AI mishaps. Over time, though, Gebru and Mitchell realized they were being left out of meetings and email threads.

In June 2020, the GPT-3 language model was released, demonstrating an ability to sometimes produce coherent prose. But Gebru’s team worried about the excitement surrounding it.

“Let’s make bigger and bigger and bigger language models,” Gebru said, recalling the popular sentiment. “We had to be like, ‘Let’s just pause and calm down for a second so we can think about the pros and cons, and maybe alternative ways of doing this.’”

Her team helped write a research paper on the ethical implications of language models, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Others at Google were not happy. Gebru was asked to retract the paper or remove Google employees’ names. She countered with a request for transparency: Who had asked for such drastic action, and why? Neither side budged. Gebru learned from one of her direct reports that she had “resigned.”


