Hitting the Books: The bias behind AI assistants' failure to understand accents

The age of talking to our computers the way we talk to other people has finally arrived, but voice-activated assistants like Siri, Alexa and Google Home have not proven to be as revolutionary – or as inclusive – as we had hoped. Users with heavy or less common accents, whether Caribbean or Cockney, Houston or Hamburg, routinely have their commands misinterpreted or ignored outright, no matter how carefully they enunciate. In her essay "Siri Disciplines," from Your Computer Is On Fire (MIT Press), Towson University professor Dr. Halcyon M. Lawrence highlights some of the deeper flaws in this nascent technology: how this preventable failure effectively excludes whole populations of potential users, and the Western biases that underlie it.

MIT Press

Excerpted from Your Computer Is On Fire, copyright © 2021, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip. Used with permission of the publisher, MIT Press.

Voice technology is regularly described as revolutionary. Beyond the technology's ability to recognize and replicate human speech and to offer users a hands-free environment, claims of a revolution – particularly by technologists – stem from a variety of trends: the growing number of people using these technologies, the growing sales of personal assistants like Amazon's Alexa, and the expanding number of voice-based applications for the home. If you are a regular user (or a designer) of voice technology, the foregoing claim may resonate with you, since the technology has likely made your life easier. For speakers with nonstandard accents (for example, African American Vernacular English or Cockney), however, virtual assistants like Siri and Alexa are unresponsive and frustrating – there are many YouTube videos that demonstrate, and even mock, these failures. For me, a speaker of Caribbean English, there is "silence" when I speak to Siri; this means there are many services, products, and even pieces of information that I cannot access by voice command. I have other ways of reaching those services, products, and information, but what is the experience of speakers for whom speech is the primary or only mode of interaction? This so-called "revolution" has left them behind. Indeed, Mar Hicks pushes us to consider that any technology that reinforces or reinscribes bias is, in fact, not revolutionary but oppressive. Voice technologies do nothing to disrupt existing "social biases and stratification"; rather, they entrench them – which means these technologies are not revolutionary for anyone.

One could argue that these technologies are new and that more accents will be supported over time. While this may be true, current trends are not encouraging. Here are some questions to consider: First, why was accent support first developed for Standard English accents of the Western world (such as American, Canadian, and British English)? Second, in the non-Western contexts where nonstandard accent support has been developed (such as Singlish and Hinglish), what is driving those initiatives? Third, why has there been no nonstandard accent support for minority speakers of English? Finally, how much accommodation – and at what cost – must nonstandard and foreign-accented speakers of English make in order to engage with existing voice technologies?

In his slave narrative, Olaudah Equiano wrote, "I have often taken up a book, and have talked to it, and then put my ears to it, when alone, in hopes it would answer me; and I have been very much concerned when I found it remained silent." Equiano's experience with the book's conventional interface mirrors the silence that nonstandard- and foreign-accented English speakers often encounter when they try to communicate with speech technologies such as Apple's Siri, Amazon's Alexa, or Google Home. True to the promise of natural language use, these technologies ask their users to make no changes to their speech patterns for an interaction to succeed. That is, unless you have a foreign accent or speak a dialect: then speech technologies practice a form of "othering" that is biased and disciplinary, demanding a kind of postcolonial assimilation to standard accents that erases the speaker's sociohistorical reality.

Because these technologies were not originally designed to process nonstandard and foreign-accented speech, such speakers must often modify their accents to reduce recognition errors. The result is the sustained assimilation and marginalization of nonstandard and foreign-accented speakers of English. This forced assimilation is particularly egregious given that the number of second-language speakers of English already exceeds the number of native English speakers worldwide. As English continues to serve as a global lingua franca facilitating commercial, academic, recreational, and technological exchange, the number of English as a second language (ESL) speakers will only continue to grow. One implication of this trend is that, over time, native English speakers may have a diminishing influence on the lexical, syntactic, and semantic structures of the language. We have already begun to witness the rise of named hybrid languages, such as Spanglish, Konglish, and Hinglish. Despite this trend and its obvious implications, foreign-accented and nonstandard-accented speech remains only marginally recognized by speech-enabled devices.
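The "recognition errors" described above are typically quantified by researchers with a metric called word error rate (WER): the edit distance between what the recognizer produced and what was actually said, normalized by the length of the true transcript. The sketch below is mine, not the author's; the sample transcripts are invented purely to illustrate how the same spoken command can score very differently across accents.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    true transcript and the recognizer's output, divided by the
    number of words in the true transcript."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical recognizer outputs for the same spoken command:
reference = "set a timer for ten minutes"
standard_accent_output = "set a timer for ten minutes"  # recognized perfectly
accented_output = "set a time for tin minute"           # three word errors

print(wer(reference, standard_accent_output))  # 0.0
print(wer(reference, accented_output))         # 0.5
```

A WER of 0.5 means half the words in the command were garbled – more than enough for an assistant to fail to act, which is the everyday "silence" the essay describes.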

Gluszek and Dovidio define an accent as "a manner of pronunciation with other linguistic levels of analysis (grammatical, syntactical, morphological, and lexical) being more or less comparable to the standard language." Accents identify the particular place or nation we come from (through geographical or regional accents such as southern American, Black American, or British Cockney), our socioeconomic status, our ethnicity, our social class, and our first language. The preference for one's own accent is well documented: people are more favorably disposed toward those with accents similar to their own than toward those with dissimilar accents. Research has shown that even infants and children display a preference for their local accent. This is consistent with the theory that similarity of attitudes and traits affects both the communication process and the perceptions that people form about one another.

With accents, however, similarity is not always attractive. Researchers have challenged the similarity-attraction principle, suggesting that it is context-specific and that cultural and emotional biases can sometimes produce positive perceptions of nonsimilar accents. Certain accents carry positive stereotypes that confer a favorable perception on the speech or the speaker. Studies further show that even when listeners regularly encounter dissimilar accents, they tend to favor standard accents over nonstandard English accents such as Cockney or Scottish accents.

On the other hand, nonsimilar accents are not always perceived positively, and speakers with foreign accents face a number of challenges. For example, Flege notes that speaking with a foreign accent carries various potential consequences for second-language (L2) learners, including foreign accent detection, diminished acceptability, diminished intelligibility, and negative evaluation. Perhaps the most significant consequence of a foreign accent is that L2 users often have difficulty making themselves understood because of errors in pronunciation. Even nonstandard native speakers of English (speakers of varieties such as my own) encounter similar difficulties because of differences in pronunciation.

Lambert et al. produced some of the earliest studies of language attitudes demonstrating accent bias. Since then, research has consistently revealed negative attitudes toward speech produced by nonnative speakers. As speech more closely approximates a native accent, listeners' evaluations grow more favorable; as it diverges, their evaluations grow less favorable. Put another way: the stronger the foreign accent, the less favorably the speech is judged.

Nonnative speech evokes negative stereotypes: such speakers are perceived as less intelligent, less loyal, less competent, poorer speakers of the language, and as having poorer political skill. The prejudice does not stop at perception, however; accent-related discriminatory behaviors have been documented in the denial of equal opportunities in housing and employment, in court judgments, in relegation to lower-status employment positions, and in the evaluation of students.

Despite the documented ways in which people who speak with nonstandard accents routinely experience discriminatory behavior, there is still very little mainstream conversation about accent bias and discrimination. In early 2017, I received the following course evaluation from one of my students, a nonnative speaker of English and an aspiring computer programmer:

I want to be very frank here but please don't be upset – your pronunciation is horrible. As a non-native speaker of English I had a hard time understanding what you were saying. One example that sticks out most: you say "goal" but that is not what I hear. Although it was funny at first, it got tiresome as the semester progressed. I get the idea that you are very proud of your accent, but movie stars manage to change their accents for their roles, so I guess you should try to speak clearly in an American accent so that non-native students can understand you better. [sic]

I should not have been surprised when I received it. David Crystal, a respected and eminent British linguist who was a regular guest on a British radio program, recalled that listeners would write in to the show to complain about pronunciations they did not like. He said, "It was the extreme nature of the language that always struck me. Listeners didn't just say they 'disliked' something. They used the most emotive words they could find. They were 'horrified,' 'shocked,' 'appalled,' 'frustrated,' 'angry' when they heard something they didn't like." Crystal suggests that reactions are so strong because a person's pronunciation (or accent) is fundamentally about identity: it is about ethnicity, it is about class, it is about a person's education and profession.

When I reflected on my student's "evaluation" of my accent, it struck me that such a comment would never have been made about immutable features like my race, my color, or my gender; yet there is an acceptance of this practice when it comes to accent, because accents are seen as a changeable feature of the speaker, alterable at will. As my student noted, Hollywood movie stars do it all the time – why couldn't I? While individuals can and do learn to adopt and switch between accents (a practice known as code-switching), doing so should be a personal choice, because accents are integral to one's identity. To expect a change of accent of others is oppressive; to create conditions in which a speaker's accent leaves them no choice is to do violence to them.

One domain in which accent bias persists unchecked is in seemingly benign devices such as public address systems and banking and airline phone menu systems; but the lack of accent diversity is particularly striking in personal assistants like Apple's Siri, Amazon's Alexa, and Google Home. Devices such as PA systems require only that listeners comprehend the standard accents they broadcast, whereas personal assistants demand not just comprehension but the performance of a standard accent by their users. These devices thereby require users to perform standard English – a practice that, in effect, disciplines and others speakers of nonstandard and foreign-accented English.
