User:Viriditas/sandbox61: Difference between revisions – Wikipedia

From Wikipedia, the free encyclopedia

Latest revision as of 02:07, 30 January 2026

Gender bias in voice technology is the use of female‑sounding default voices in commercial telephony and speech‑enabled artificial intelligence systems that is reported to perpetuate stereotypes of women as submissive, compliant, and servile.[1] Saniye Gülser Corat and the Algorithmic Justice League have addressed the problem in their work, calling for gender-neutral training of AI voice assistants.

The female voice has been the standard in telephony since the 1880s, but data regarding the effectiveness of using a female voice as the primary conveyor of information is limited. In spite of this, various claims have been made promoting the use of the female voice in voice technology. One claim suggests that the higher pitch of the female voice makes it easier to understand. This idea was the subject of military research by the United States Army Medical Research and Development Command in 1995 regarding the problem of noisy cockpits and distinguishing voices above the sound of aircraft engines. The study found that “mean female speech is less intelligible than mean male speech in all experimental conditions…However, the differences in intelligibility are not always statistically significant and may not be meaningful in operational situations.”

Other claims in favor of using the female voice argue it is more soothing and relaxing than a male voice, with some studies suggesting that female voices are perceived as more trustworthy.[2]

Clifford Nass argues that these preferences are both reinforced in the womb and by culture. Nass cites studies showing that fetuses respond with recognition to the sound of their mother’s voice, but not the sound of other female voices or their father. On the other hand, Nass found cultural preferences for male voices in Germany, where men objected to the use of a female voice in a navigation system tested in a BMW 5 Series. According to Nass at the time he consulted with BMW, “German male drivers…do not take directions from females.”[3]

It is also argued that the female voice is perceived as helpful, docile, and servile, reinforcing stereotypes that women are better suited to supporting roles than to the leadership roles given to men. As of 2021, voice assistants using the female voice accounted for 92.4% of the smartphone sector in the United States.[2]

Scholars Donna Haraway, Judy Wajcman, and Londa Schiebinger anticipated the concerns of technofeminism; more recent scholars include Lauren Klein, Catherine D'Ignazio, and many others, working in fields as diverse as gender studies, science and technology studies, and feminist HCI and gender HCI.[2]

A United Nations study led by Saniye Gülser Corat in 2019[4] reported on gender bias in AI tools across the industry.[1] Other researchers have reported similar findings, including Kelly B. Wagman and Lisa Parks at the Massachusetts Institute of Technology in 2021, who reported that the digital assistant Amazon Alexa was "gendered feminine and she performs historically feminized clerical labor."[5]

The gender disparity in computing has been cited as one contributing factor to the lack of gender neutrality in voice assistant training and design.[2]

  1. ^ a b Gajanan, Mahita (May 22, 2019). “AI Voice Assistants Reinforce Gender Biases, U.N. Report Says”. Time. Retrieved January 27, 2026.
  2. ^ a b c d Ovacik, B. (December 31, 2025). “Digital Authority and the Reproduction of Gender Inequality: Addressing Gender Bias in Voice Assistant Development”. Journal of AI. 9 (1): 13–31.
  3. ^ Nass, Clifford; Brave, Scott (2005). Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship. Cambridge, Mass.: MIT Press. pp. 31, 55–57. ISBN 9780262140928. OCLC 57349144.

  4. ^ West, M., Kraut, R., & Ei Chew, H. (2019). “I’d blush if I could: closing gender divides in digital skills through education”. UNESCO.
  5. ^ Wagman, Kelly B.; Parks, Lisa (April 2021). "Beyond the Command: Feminist STS Research and Critical Issues for the Design of Social Machines". Proceedings of the ACM on Human-Computer Interaction. 5 (CSCW1). ISSN 2573-0142.
