Misunderstanding your native language: Regional accent impedes processing of information status (Psychonomic Bulletin & Review)

February 6, 2024

Addressing Equity in Natural Language Processing of English Dialects

Regional accents present challenges for natural language processing.

At this point, bias in AI and natural language processing (NLP) is such a well-documented and frequent issue in the news that when researchers and journalists point out yet another example of prejudice in language models, readers can hardly be surprised.

As Ziems relates, “Many of these patterns were observed by field linguists operating in an oral context with native speakers, and then transcribed.” With this empirical data and the subsequent language rules, Ziems could build a framework for language transformation. Looking at parts of speech and grammatical rules for these dialects enabled Ziems to take an SAE sentence like “She doesn’t have a camera” and break it down into its discrete parts. “We might identify that there’s a negation in there — ‘not’ — and that the verb ‘do’ is connected to that negation.” By analyzing parts of speech in this way, as opposed to just vocabulary, Ziems believes he and the research team have built a robust and comprehensive framework to achieve dialect invariance: constant performance over dialect shifts.
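To make that parsing step concrete, here is a minimal sketch of the kind of part-of-speech and dependency analysis described above, assuming an off-the-shelf spaCy English pipeline (en_core_web_sm); it is an illustration only, not the Multi-VALUE framework itself.

```python
# Minimal sketch: find the negation in "She doesn't have a camera" and the verb
# it attaches to, using an off-the-shelf spaCy pipeline. This illustrates the
# kind of analysis described above; it is not the Multi-VALUE code.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes: python -m spacy download en_core_web_sm
doc = nlp("She doesn't have a camera")

# Inspect part-of-speech tags and dependency relations for each token.
for token in doc:
    print(f"{token.text:8} pos={token.pos_:5} dep={token.dep_:6} head={token.head.text}")

# A dialect-aware rewrite rule could key off the negation relation rather than
# off surface vocabulary.
for token in doc:
    if token.dep_ == "neg":
        print(f"negation '{token.text}' attaches to the verb '{token.head.text}'")
```

Keying transformations to dependency relations like this, rather than to word lists, is what allows rules for a dialect feature to generalize beyond specific vocabulary.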

On the other hand, several studies treat regional accents as a type of phonetic variation similar to speaker variation within a regional accent. For example, Le, Best, Tyler, and Kroos (2007) used regional variants to ask how much phonetic detail is represented in the mental lexicon, comparing psycholinguistic models assuming abstract phonological representations of words (Cohort model: Marslen-Wilson, 1987; TRACE: McClelland & Elman, 1986) against models assuming storage of phonetic details (episodic theory: Goldinger, 1998; exemplar theory: Johnson, 1997). They tested spoken-word recognition of stimuli in either the participants’ native dialect or in one of two unfamiliar non-native dialects, one of which was phonetically more similar to the native accent than the other. Based on their finding of higher accuracy and earlier recognition in the phonetically similar unfamiliar dialect, Le et al. argued that mental representations must contain both abstract representations and fine phonetic detail. Crucially, this and other studies assume that dialect differences are a kind of phonetic variant that listeners map to their existing representations or add to their existing set of exemplars (Best, Tyler, Gooding, Orlando, & Quann, 2009; Kraljic, Brennan, & Samuel, 2008; Nycz, 2013). Thus, they suggest that different dialects share the same mental representations, i.e., that “tomahto” and “tomayto” are underlyingly the same.

In Experiment 2, 19 native speakers of Canadian English rated the British English instructions used in Experiment 1, as well as the same instructions spoken by a Canadian imitating the British English prosody. While information status had no effect for the Canadian imitations, the original stimuli received higher ratings when prosodic realization and information status of the referent matched than for mismatches, suggesting a native-like competence in these offline ratings.

Advances in artificial intelligence and computer graphics digital technologies have contributed to a relative increase in realism in virtual characters. Preserving virtual characters’ communicative realism, in particular, has kept pace with improvements in natural language technology and animation algorithms. We model the effects of an English-speaking digital character with different accents on human interactants (i.e., users). Our cultural influence model proposes that paralinguistic realism, in the form of accented speech, is effective in promoting culturally congruent cognition only when it is self-relevant to users.

These findings suggest that Canadian English does not use the same prosodic marking of information status as British English. Canadian speakers, while of course native speakers of English, are in that sense non-native speakers of the British variety. Yet, these and other studies on the processing of accented speech typically concentrate on the divergent pronunciation of individual segments or the transfer of syllable structure, and ignore higher levels of language processing, including speech prosody (see overview in Cristia et al., 2012). In the current study, we aimed to find out whether regional accent can impede language processing at the discourse level by investigating Canadian English listeners’ use of prosodic cues to identify new versus previously mentioned referents when processing British-accented English. Whereas previous research has largely concentrated on the pronunciation of individual segments in foreign-accented speech, we show that regional accent impedes higher levels of language processing, making native listeners’ processing resemble that of second-language listeners.

The cultural influence model: when accented natural language spoken by virtual characters matters

Results from Experiment 1 indicate that when processing British English prosodic cues to information status, contrary to our original hypothesis, native Canadian English speakers resemble non-native speakers confronted with the same stimuli (Chen & Lai, 2011) rather than native British English speakers (Chen et al., 2007). In both experiments, our Canadian participants treated falling accents as a cue to newness and unaccented realizations as a cue to givenness. However, rising accents, which are a clear cue to givenness for native British English speakers, were not a clear cue towards either information status in Experiment 1. In line with this, Canadian listeners showed no effect of information status on the ratings of Canadian-spoken stimuli in Experiment 2.

For example, a Chinese or Middle Eastern English accent may be perceived as foreign to individuals who do not share the same ethnic cultural background with members of those cultures. However, for individuals who are familiar and affiliate with those cultures (i.e., in-group members who are bicultural), accent not only serves as a motif of shared social identity, it also primes them to adopt culturally appropriate interpretive frames that influence their decision making.

Scholars of natural language processing are exploring the human emotions and social meanings behind the words we use. Jurafsky says that his field, known as natural language processing (NLP), is now in the midst of a shift from simply trying to understand the literal meaning of words to digging into the human emotions and the social meanings behind those words. And, by looking at the language of the past, language analysis promises to reveal who we once were. Meanwhile, in fields such as medicine, NLP is being used to help doctors diagnose mental illnesses, like schizophrenia, and to measure how those patients respond to treatment.

Whether we call a tomato “tomahto” or “tomayto” has come to represent an unimportant or minor difference – “it’s all the same to me,” as the saying goes. However, what importance such socio-linguistic differences actually have for language processing, and how to integrate their potential effects in psycholinguistic models, is far from clear. On the one hand, recent research shows that regional accents different from the listeners’, such as Indian English for Canadian listeners, impede word processing (e.g., Floccia, Butler, Goslin, & Ellis, 2009; Hawthorne, Järvikivi, & Tucker, 2018).

Native-speaker listeners constantly predict upcoming units of speech as part of language processing, using various cues. However, this process is impeded in second-language listeners, as well as when the speaker has an unfamiliar accent. Native listeners use prosodic cues to information status to disambiguate between two possible referents, a new and a previously mentioned one, before they have heard the complete word.

“We’ve seen performance drops in question-answering for Singapore English, for example, of up to 19 percent,” says Ziems. Many of these variants are also considered “low resource,” meaning there’s a paucity of natural, real-world examples of people using these languages. However, less well-publicized are the talented minds working to solve these issues of bias, like Caleb Ziems, a third-year PhD student mentored by Diyi Yang, assistant professor in the Computer Science Department at Stanford and an affiliate of Stanford’s Institute for Human-Centered AI (HAI).

Fig. 3 illustrates the difference in looks to the competitor between all pairs of conditions (one pair per panel). Gray shading marks 99% confidence intervals, and dotted vertical lines indicate the time points that are significantly different between the conditions (i.e., where the confidence intervals do not overlap with the line indicating a difference of zero).
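For readers who want to see what lies behind a plot like Fig. 3, here is a minimal sketch of computing the proportion of looks to the competitor per condition and time bin; the data frame and its column names (trial, condition, time_ms, aoi) are assumptions for illustration, not the study’s actual data format.

```python
# Minimal sketch: proportion of looks to the competitor per condition and time
# bin in the 200-700 ms window analysed in the study. The data frame and its
# column names (trial, condition, time_ms, aoi) are hypothetical.
import pandas as pd

def competitor_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Proportion of gaze samples on the competitor, per condition and time bin."""
    window = samples[(samples["time_ms"] >= 200) & (samples["time_ms"] <= 700)].copy()
    window["bin"] = (window["time_ms"] // bin_ms) * bin_ms
    window["on_competitor"] = (window["aoi"] == "competitor").astype(float)
    return (window.groupby(["condition", "bin"])["on_competitor"]
                  .mean()
                  .rename("prop_competitor")
                  .reset_index())

# Usage with a hypothetical file:
# props = competitor_proportions(pd.read_csv("gaze_samples.csv"))
# Confidence bands and significance testing (the GAMM analysis described below)
# are deliberately omitted here.
```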

These findings underline the importance of expanding psycholinguistic models of second language/dialect processing and representation to include both prosody and regional variation. As a measure of interference, we analyzed the proportion of looks to the competitor as a time series between 200 ms and 700 ms after the onset of the target word as our dependent variable (Fig. 2). We used generalized additive mixed-effects modelling (GAMM) in R (Porretta, Kyröläinen, van Rij, & Järvikivi, 2018; R Core Team, 2018; Wood, 2016) to model the time-series data (727 trials in total) (see Online Supplementary Materials for details on preprocessing and analysis).

Current language technologies, which are typically trained on Standard American English (SAE), are fraught with performance issues when handling other English variants.
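As a rough sketch of how such a performance gap could be quantified, the snippet below compares a model’s accuracy on SAE inputs with its accuracy on dialect-transformed counterparts. The model_predict and to_dialect functions are hypothetical placeholders (for example, a QA model and a dialect transformation rule set such as the ones Multi-VALUE provides), not an actual API.

```python
# Hedged sketch: quantifying a dialect performance gap on a QA-style task.
# `model_predict` and `to_dialect` are hypothetical stand-ins for a real QA
# model and a dialect transformation (e.g., the rules Multi-VALUE provides).
from typing import Callable, Sequence

def accuracy(model_predict: Callable[[str], str],
             questions: Sequence[str],
             answers: Sequence[str]) -> float:
    correct = sum(model_predict(q).strip().lower() == a.strip().lower()
                  for q, a in zip(questions, answers))
    return correct / len(questions)

def dialect_gap(model_predict: Callable[[str], str],
                to_dialect: Callable[[str], str],
                questions: Sequence[str],
                answers: Sequence[str]) -> float:
    """Accuracy on SAE inputs minus accuracy on dialect-transformed inputs."""
    sae_acc = accuracy(model_predict, questions, answers)
    dialect_acc = accuracy(model_predict, [to_dialect(q) for q in questions], answers)
    return sae_acc - dialect_acc  # a gap of 0.19 matches the 19 percent drop cited above
```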

Linguist and computer scientist Dan Jurafsky explores how AI is expanding from capturing individual words and sentences to modeling the social nature of language. Nineteen native speakers of Canadian English participated in the study (13 female, mean age 19.11 years).

In Experiment 1, 42 native speakers of Canadian English followed instructions spoken in British English to move objects on a screen while their eye movements were tracked. By contrast, the Canadian participants, similarly to second-language speakers, were not able to make full use of prosodic cues in the way native British listeners do.

Here, we investigate the extent to which Canadian listeners’ reactions to British English prosodic cues to information status resemble those of British native and Dutch second-language speakers of English. A second experiment more explicitly addresses the issue of shared versus different representations for different dialects by testing if the same prosodic cues are rated as equally contextually appropriate when produced by a Canadian speaker.

Additionally, accentuation of the target word was manipulated in the second instruction, so that the target word carried a falling accent, a rising accent, or was unaccented (see Fig. 1 and Online Supplementary Materials; the first instruction always had the same intonational contour). Information status (given/new) and accentuation (falling/rising/unaccented) of the target word in the second instruction were crossed, yielding six experimental conditions.
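For illustration only, the 2 × 3 crossing described above can be enumerated directly:

```python
# The six experimental conditions: information status crossed with accentuation.
from itertools import product

information_status = ["given", "new"]
accentuation = ["falling", "rising", "unaccented"]

for status, accent in product(information_status, accentuation):
    print(f"{status} referent, {accent} target word")  # 2 x 3 = 6 conditions
```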

The research of Ziems and his colleagues led to the development of Multi-VALUE, a suite of resources that aim to address equity challenges in NLP, specifically around the observed performance drops for different English dialects. The result could mean AI tools from voice assistants to translation and transcription services that are more fair and accurate for a wider range of speakers.

We used the visual and auditory stimuli from Chen et al. (2007) and Chen and Lai (2011), who adopted the design and items from Dahan et al. (2002). The target items were made up of 18 cohort target-competitor pairs that had similar frequencies and shared an initial phoneme string of various lengths (e.g., candle vs. candy, sheep vs. shield; see Online Supplementary Materials for details).
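As a small illustration of the cohort-pair criterion just described, the sketch below counts how many initial phonemes two transcriptions share; the toy transcriptions are illustrative, not the study’s actual materials.

```python
# Sketch: length of the shared initial phoneme string for a cohort pair.
# The transcriptions below are toy examples, not the experiment's materials.
from itertools import takewhile

def shared_prefix_length(a: list[str], b: list[str]) -> int:
    """Number of initial phonemes shared by two transcriptions."""
    return sum(1 for _ in takewhile(lambda pair: pair[0] == pair[1], zip(a, b)))

candle = ["k", "ae", "n", "d", "l"]
candy = ["k", "ae", "n", "d", "i"]
print(shared_prefix_length(candle, candy))  # -> 4 shared initial phonemes
```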
