15.ai: Difference between revisions – Wikipedia

From 2020, 15.ai has generated audio at [[44.1 kHz]] [[sampling rate]]—higher than the 16 kHz standard used by most deep learning text-to-speech systems of that period. This higher fidelity created more detailed audio [[spectrograms]] and greater audio resolution, though it also made any synthesis imperfections more noticeable.{{sfn|Feng|2020}} The system processed speech using customized deep neural networks combined with specialized audio synthesis algorithms.{{sfnm|Chandraseta|2021}} While the underlying technology could produce 10 seconds of audio in less than 10 seconds of processing time (i.e. "faster-than-real-time"), the actual user experience often involved longer waits as the servers managed thousands of simultaneous requests, sometimes taking more than a minute to deliver results.{{sfnm|Chandraseta|2021|Lamorlette|2021}}

Due to its [[nondeterministic]] design, 15.ai produced variations in its speech output. 15.ai introduced the concept of '''emotional contextualizers''', which allowed users to specify the emotional tone of generated speech through guiding phrases.{{sfnm|Chandraseta|2021|Temitope|2024}} The emotional contextualizer functionality utilized DeepMoji, a [[sentiment analysis]] neural network developed at the [[MIT Media Lab]] that processed [[emoji]] embeddings from 1.2 billion Twitter posts to analyze their emotional content.{{sfn|Kurosawa|2021}} If an input into 15.ai contained additional context (specified by a vertical bar), the additional context following the bar would be used as the emotional contextualizer.<ref>{{harvnb|Chandraseta|2021}}: "By adding a '|' after the original sentence and providing an extra sentence, we could control what emotion the original sentence will be spoken with. In other words, 'text_1|text_2' will produce a voice line of text_1 with the emotion of text_2."</ref> For example, if the input was <code>Today is a great day!|I'm very sad.</code>, the selected character would speak the sentence "Today is a great day!" in the emotion one would expect from someone saying the sentence "I'm very sad."<ref>{{harvnb|Chandraseta|2021}}: "because it could force the bot into generating previously unknown data, such as saying 'Today is a great day' with a sad or angry emotion"</ref>

[[File:TalkNet.png|thumb|upright=1.2|right|An example of a conversion of the text "[[Daisy Bell#Computing and technology|daisy bell]]" into speech, starting from [[English orthography]]. English words are parsed as a string of ARPABET phonemes, then passed through a pitch predictor and a [[Mel-frequency cepstrum|mel-spectrogram]] generator to generate audio.]]

== Legacy ==

[[File:CNN_Home_Alone_2_Heavy_Weapons_Guy_15ai_January_2021.png|thumb|upright=1.3|right|A January 2021 [[CNN]] broadcast showing a viral video that used 15.ai to replace [[Donald Trump]]'s ''[[Home Alone 2]]'' cameo with the Heavy Weapons Guy from ''[[Team Fortress 2]]'']]

15.ai was an early pioneer of audio deepfakes, and its popularity led to the emergence of AI speech synthesis-based memes during the initial stages of the [[AI boom]] in 2020. 15.ai is credited as the first platform to popularize AI voice cloning in [[Internet meme]]s and content creation,<ref name="memes"/> particularly through its ability to generate convincing character voices in real-time without requiring extensive technical expertise.{{sfnm|Ruppert|2021|Morton|2021}} The platform's impact was especially notable in fan communities, including the [[My Little Pony: Friendship Is Magic fandom|''My Little Pony: Friendship Is Magic'']], ''[[Portal (series)|Portal]]'', ''[[Team Fortress 2]]'', and ''[[SpongeBob SquarePants]]'' fandoms, where it enabled the creation of viral content that garnered millions of views across social media platforms like [[Twitter]] and [[YouTube]].{{sfnm|遊戲|2021|Kurosawa|2021|Morton|2021|Temitope|2024}} ''Team Fortress 2'' content creators also used the platform to produce both short-form memes and complex narrative animations using [[Source Filmmaker]]. Fan creations included skits and fan animations,{{sfnm|Zwiezen|2021|Ruppert|2021|Kurosawa|2021|Abisola|2025}} crossover content,{{sfn|Ruppert|2021}} recreations of viral videos,{{sfnm|Zwiezen|2021|Morton|2021}} adaptations of [[fan fiction]],{{sfn|Abisola|2025}} music videos, and musical compositions.{{sfn|Abisola|2025}} Some fan creations gained mainstream attention: a viral video that replaced [[Donald Trump]]'s cameo in ''[[Home Alone 2: Lost in New York]]'' with the [[Heavy Weapons Guy]]'s AI-generated voice was featured on a daytime [[CNN]] segment in January 2021.{{sfnm|Clayton|2021|CNN|2021}} Some users integrated 15.ai's voice synthesis with [[Voice user interface|voice command software]] to create personal assistants.{{sfn|Furushima|2021}}

[[File:The_Tax_Breaks_15.ai.jpg|thumb|upright=1.3|right|"The Tax Breaks" is a 17-minute fan-made episode of ''Friendship Is Magic'' produced using character voices from 15.ai.{{sfn|Abisola|2025}}]]

Its influence since its launch has been publicly recognized,{{sfn|Wright|2023}} with commercial alternatives like [[ElevenLabs]]{{efn|which uses "11.ai" as a legal byname for its web domain{{sfn|ElevenLabs|2024b|ref=ElevenLabs-2024b}}}} and [[Speechify]] emerging to fill the void after its initial shutdown.{{sfnm|Staniszewski|2024|Play.ht|2024|2ref=Play.ht-2024|Weitzman|2023}} Contemporary generative voice AI companies have acknowledged 15.ai's pioneering role. [[Y Combinator]] startup PlayHT called the debut of 15.ai "a breakthrough in the field of text-to-speech (TTS) and speech synthesis".{{sfn|Play.ht|2024|ref=Play.ht-2024}} [[Cliff Weitzman]], the founder and CEO of [[Speechify]], credited 15.ai for "making AI voice cloning popular for content creation by being the first […] to feature popular existing characters from fandoms".{{sfn|Weitzman|2023}} Mati Staniszewski, co-founder and CEO of [[ElevenLabs]], wrote that 15.ai was transformative in the field of [[deep learning speech synthesis|AI text-to-speech]].{{sfn|Staniszewski|2024}}

15.ai established technical precedents that influenced subsequent developments in AI voice synthesis. Its integration of [[DeepMoji]] for emotional analysis demonstrated the viability of incorporating sentiment-aware speech generation,{{sfn|Osman|2022}} while its support for ARPABET phonetic transcriptions set a standard for precise pronunciation control in public-facing voice synthesis tools.{{sfn|Temitope|2024}} The platform’s unified multi-speaker model, which enabled simultaneous training of diverse character voices, allowed the system to recognize emotional patterns across different voices even when certain emotions were absent from individual character training sets; for example, if one character had examples of joyful speech but no angry examples, while another had angry but no joyful samples, the system could learn to generate both emotions for both characters by understanding the common patterns of how emotions affect speech.{{sfnm|Kurosawa|2021|Temitope|2024}}

15.ai also contributed to the reduction of training data requirements for speech synthesis. Earlier systems like [[Google AI]]'s Tacotron and [[Microsoft Research]]'s FastSpeech required tens of hours of audio to produce acceptable results and failed to generate intelligible speech with less than 24 minutes of training data.<ref name="Google"/>{{sfn|Ren|Ruan|Tan|Qin|2019}} In contrast, 15.ai demonstrated the ability to generate speech with substantially less training data&mdash;specifically, the name "15.ai" refers to the creator's claim that a voice could be cloned with just 15 seconds of data.{{sfn|Temitope|2024}} This approach to data efficiency influenced subsequent developments in AI voice synthesis technology, as the 15-second benchmark became a reference point for later voice synthesis systems. The original claim that only 15 seconds of data is required to clone a human's voice was corroborated by [[OpenAI]] in 2024.{{sfnm|OpenAI|2024|1ref=OpenAI-2024|Temitope|2024}}

== See also ==

*[[AI boom]]

*[[Brony fandom]]

*[[Character.ai]]

*[[Deepfake]]

*[[Ethics of artificial intelligence]]

*[[WaveNet]]

*[[My Little Pony: Friendship Is Magic fandom|''My Little Pony: Friendship Is Magic'' fandom]]

*[[Synthetic media]]

Real-time text-to-speech AI tool

15.ai is a free non-commercial web application and research project that uses artificial intelligence to generate text-to-speech voices of fictional characters from popular media. Created by a pseudonymous artificial intelligence researcher known as 15, who began developing the technology as a freshman during their undergraduate research at the Massachusetts Institute of Technology, the application allowed users to make characters from video games, television shows, and movies speak custom text with emotional inflections faster than real-time.[a] The platform was notable for its ability to generate convincing voice output using minimal training data—the name “15.ai” referenced the creator’s claim that a voice could be cloned with just 15 seconds of audio, in contrast to contemporary deep learning speech models which typically required tens of hours of audio data. It was an early example of an application of generative artificial intelligence during the initial stages of the AI boom.

Launched in March 2020, 15.ai gained widespread attention in early 2021 when content utilizing it went viral on social media platforms like YouTube and Twitter, and quickly became popular among Internet fandoms, such as the My Little Pony: Friendship Is Magic, Team Fortress 2, and SpongeBob SquarePants fandoms. The service distinguished itself through its support for emotional context in speech generation through emojis, precise pronunciation control through phonetic transcriptions, and multi-speaker capabilities that allowed a single model to generate diverse character voices. 15.ai is credited as the first platform to popularize AI voice cloning (audio deepfakes) in memes and content creation.[1]

Voice actors and industry professionals debated 15.ai’s merits for fan creativity versus its potential impact on the profession. While many critics praised the application’s accessibility and emotional control, they also criticized technical limitations in areas like prosody options and non-English language support. 15.ai prompted discussions about ethical implications, including concerns about reduction of employment opportunities for voice actors, voice-related fraud, and misuse in explicit content.

In January 2022, Voiceverse generated controversy when it was discovered that the company had generated audio using 15.ai without attribution and sold it as a non-fungible token (NFT) without permission.[2] News publications universally characterized this incident as Voiceverse having “stolen” voice lines from 15.ai.[3] The service was ultimately taken offline in September 2022 due to legal issues surrounding artificial intelligence and copyright. Its shutdown was followed by the emergence of various commercial alternatives in subsequent years, with their founders acknowledging 15.ai’s pioneering influence in the field of deep learning speech synthesis.

On May 18, 2025, after nearly three years of inactivity, 15 launched 15.dev, a sequel to the original service.

History

Background

A comparison of the alignments (attentions) between Tacotron and a modified variant of Tacotron

The field of artificial speech synthesis underwent a significant transformation with the introduction of deep learning approaches. In 2016, DeepMind’s publication of the seminal paper WaveNet: A Generative Model for Raw Audio marked a pivotal shift toward neural network-based speech synthesis, demonstrating unprecedented audio quality through causal convolutional neural networks. Previously, concatenative synthesis—which worked by stitching together pre-recorded segments of human speech—was the predominant method for generating artificial speech, but it often produced robotic-sounding results at the boundaries of sentences. Two years later, in 2018, Google AI’s Tacotron 2 demonstrated that neural networks could produce highly natural speech synthesis, but required substantial training data—typically tens of hours of audio—to achieve acceptable quality. When trained on smaller datasets, such as 2 hours of speech, the output quality degraded while remaining intelligible; with just 24 minutes of training data, Tacotron 2 failed to produce intelligible speech.[5] These were later followed by HiFi-GAN, a generative adversarial network (GAN)-based vocoder that improved the efficiency of waveform generation while producing high-fidelity speech, and by Glow-TTS, which introduced a flow-based approach that allowed for both fast inference and voice style transfer. Chinese tech companies also made significant contributions, with Baidu and ByteDance developing proprietary text-to-speech frameworks that further advanced the technology, though specific technical details of their implementations remained largely undisclosed.
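
The causal constraint that WaveNet popularized is straightforward to express in code: each output sample of a convolution over a waveform may depend only on the current and earlier input samples, never on future ones. The following is a minimal illustrative sketch in PyTorch, not DeepMind’s implementation:

```python
# Minimal sketch of a WaveNet-style causal 1-D convolution: left-padding by
# (kernel_size - 1) * dilation ensures no output sample "sees" the future.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, channels: int, kernel_size: int, dilation: int = 1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); pad on the left only, then convolve.
        x = nn.functional.pad(x, (self.left_pad, 0))
        return self.conv(x)

x = torch.randn(1, 8, 100)                        # toy 8-channel signal
y = CausalConv1d(8, kernel_size=2, dilation=4)(x)
print(y.shape)                                    # torch.Size([1, 8, 100])
```

Stacking such layers with doubling dilations yields the exponentially growing receptive field described in the WaveNet paper.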

2016–2020: Conception and development

[…] The website has multiple purposes. It serves as a proof of concept of a platform that allows anyone to create content, even if they can’t hire someone to voice their projects.

It also demonstrates the progress of my research in a far more engaging manner – by being able to use the actual model, you can discover things about it that even I wasn’t aware of (such as getting characters to make gasping noises or moans by placing commas in between certain phonemes).

It also doesn’t let me get away with picking and choosing the best results and showing off only the ones that work […] Being able to interact with the model with no filter allows the user to judge exactly how good the current work is at face value.

15.ai was conceived in 2016 as a research project in deep learning speech synthesis by a developer known as 15 (at the age of 18) during their freshman year at the Massachusetts Institute of Technology (MIT) as part of its Undergraduate Research Opportunities Program (UROP).[12] The developer was inspired by DeepMind’s WaveNet paper, with development continuing through their studies as Google AI released Tacotron 2 the following year. By 2019, the developer had demonstrated at MIT their ability to replicate WaveNet and Tacotron 2’s results using 75% less training data than previously required. The name 15 is a reference to the creator’s claim that a voice can be cloned with as little as 15 seconds of data.

The developer had originally planned to pursue a doctorate based on their undergraduate research, but opted to work in the tech industry instead after their startup was accepted into the Y Combinator accelerator in 2019. After their departure in early 2020, the developer returned to their voice synthesis research, implementing it as a web application. According to a post on X from the developer, instead of using conventional voice datasets like LJSpeech that contained simple, monotone recordings, they sought out more challenging voice samples that could demonstrate the model’s ability to handle complex speech patterns and emotional undertones.[tweet 1] The Pony Preservation Project—a fan initiative originating from /mlp/, 4chan’s My Little Pony board, that had compiled voice clips from My Little Pony: Friendship Is Magic—played a crucial role in the implementation. The project’s contributors had manually trimmed, denoised, transcribed, and emotion-tagged every line from the show. This dataset provided ideal training material for 15.ai’s deep learning model.

2020–2022: Release and operation

An example of a multi-speaker embedding. The neural network maps the predicted timestamps to a masked embedding sequence that encodes speaker information.

15.ai was released in March 2020 with a limited selection of characters, including those from My Little Pony: Friendship Is Magic and Team Fortress 2. The system was designed to function efficiently with limited training data and required only minutes of clean audio per character, in contrast to the 40+ hours typically needed by traditional deep learning models.[15]

Upon its launch, 15.ai was offered as a free and non-commercial service that did not require user registration or user accounts to operate, and required the user to accept the terms of use before proceeding. Users were permitted to create any content with the synthesized voices under two specific conditions: they must properly credit 15.ai by including the website URL in any posts, videos, or projects using the generated audio; and they were prohibited from mixing 15.ai outputs with other text-to-speech outputs in the same work to prevent misrepresentation of the technology’s capabilities.[18]

More voices were added to the website in the following months. A significant technical advancement came in late 2020 with the implementation of a multi-speaker embedding in the deep neural network, enabling simultaneous training of multiple voices rather than requiring individual models for each character voice. This not only allowed rapid expansion from eight to over fifty character voices, but also let the model recognize common emotional patterns across characters, even when certain emotions were missing from some characters’ training data.

By May 2020, the site had served over 4.2 million audio files to users. In early 2021, the application gained popularity after skits, memes, and fan content created using 15.ai went viral on Twitter, TikTok, Reddit, Twitch, Facebook, and YouTube. At its peak, the platform incurred operational costs of US$12,000 per month from the AWS infrastructure needed to handle millions of daily voice generations; despite receiving offers from companies to acquire 15.ai and its underlying technology, the website remained independent, funded out of the developer’s personal earnings from their previous startup. The developer was 23 years old at the time.

2022: Voiceverse NFT controversy

A satirical meme representing the “right-click, save as” criticism of NFTs. Critics of Voiceverse pointed out the irony of selling ownership rights to AI voices when they themselves had copied 15.ai’s technology without attribution.

On January 14, 2022, a controversy ensued after it was discovered that Voiceverse NFT had taken credit for voice lines generated from 15.ai without permission[3] and sold them as NFTs (non-fungible tokens).[2] This came shortly after 15.ai’s developer had explicitly stated in December 2021 that they had no interest in incorporating NFTs into their work. Log files showed that Voiceverse had generated audio of characters from My Little Pony: Friendship Is Magic using 15.ai and pitched it up to make the voices unrecognizable as the originals in order to market their own platform, in violation of 15.ai’s terms of service, which explicitly prohibited commercial use and required proper attribution.

Voiceverse initially claimed their platform would allow NFT owners to possess commercial rights to AI-generated voices for content creation, in-game chats, and video calls. When confronted with evidence of the misappropriation, Voiceverse claimed that someone on their marketing team had used the voice without properly crediting 15.ai, explaining in their Discord server that the team had been in such a rush to create a partnership demo that they used 15.ai without waiting for their own voice technology to be ready.[27] The controversial tweet was deleted thereafter. In response to their apology, 15 tweeted “Go fuck yourself,”[29] which went viral, amassing hundreds of thousands of retweets and likes on Twitter in support of the developer.

I’m partnering with @VoiceverseNFT to explore ways where together we might bring new tools to new creators to make new things, and allow everyone a chance to own & invest in the IP’s they create.
We all have a story to tell.
You can hate.
Or you can create.
What’ll it be?

Following continued backlash and the plagiarism revelation, voice actor Troy Baker (who had partnered with Voiceverse) faced criticism both for supporting an NFT project and for the confrontational tone of his announcement. Baker had described Voiceverse’s service as allowing people to “create customized audiobooks, YouTube videos, e-learning lectures, or even podcasts with your favorite voice all without the hassle of additional legal work,” which raised concerns about potentially replacing professional voice actors with AI. Baker subsequently acknowledged that his original announcement tweet ending with “You can hate. Or you can create. What’ll it be?” may have been “antagonistic,” and on January 31, announced he would discontinue his partnership with Voiceverse.

The event raised concerns about NFT projects, which critics observed were frequently associated with intellectual property theft and questionable business practices. The incident was documented in the AI Incident Database (AIID), which catalogued it as “an AI-synthetic audio sold as an NFT on Voiceverse’s platform [that] was acknowledged by the company for having been created by 15.ai […] and reused without proper attribution,” and in the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository, which placed it within the controversial trend of the commercialization of AI-generated voices through NFTs. The controversy was also featured in writer and crypto skeptic Molly White’s Web3 Is Going Just Great project, which documented how Baker’s partnership announcement and its antagonistic tone exacerbated negative reactions to the NFT initiative. White commented on the vague nature of Voiceverse’s offering, described only as “provid[ing] you an ownership to a unique voice in the Metaverse,” and stated that the revelation of stolen work from 15.ai further damaged Voiceverse’s credibility. Russian educational platform Skillbox listed the incident as an example of fraud in NFTs. Voice actor and YouTuber Yong Yea criticized voice NFTs for their potential impact on the voice acting industry, and stated in a follow-up YouTube video:

“This isn’t just one of those things [Voiceverse] can go ‘Whoopsies!’ on. [They] plagiarized somebody else’s work and used that as a means to falsely market the quality of [their] own products, by using somebody else’s higher quality voice AI to promote [Voiceverse] for [their] own benefit.”[video 1]

In a 2024 class action lawsuit filed against LOVO, Inc., court documents alleged that the founders of LOVO also created Voiceverse, with plaintiffs claiming that Voiceverse had “already been found to have stolen technology from [15.ai]”.[43]

2022–2024: Inactivity

In September 2022, 15.ai was taken offline due to legal issues surrounding artificial intelligence and copyright. In a post on Twitter, 15 suggested a potential future version that would better address copyright concerns from the outset.

2025: Revival

On May 18, 2025, 15 launched 15.dev as the official sequel to 15.ai.[45][46] Fandom news site Equestria Daily reported that the website included “almost every voiced pony in the show” with “a dropdown for various emotions you want to generate.”

Features

Three AI-generated voice line variations from 15.ai showing their waveforms and respective alignment confidence scores

15.ai is non-commercial, has no advertisements, generates no revenue, and operates without requiring user registration or accounts. Users are able to generate speech by inputting text and selecting a character voice, with optional parameters for emotional contextualizers and phonetic transcriptions. Each request produces three audio variations with distinct emotional deliveries. Characters available included multiple characters from Team Fortress 2 and My Little Pony: Friendship Is Magic, including the Mane Six and Derpy Hooves; GLaDOS, Wheatley, and the Sentry Turret from the Portal series; SpongeBob SquarePants; Kyu Sugardust from HuniePop; Rise Kujikawa from Persona 4; Daria Morgendorffer and Jane Lane from Daria; Carl Brutananadilewski from Aqua Teen Hunger Force; Steven Universe from Steven Universe; Sans from Undertale; Madeline and multiple characters from Celeste; the Tenth Doctor from Doctor Who; the Narrator from The Stanley Parable; and HAL 9000 from 2001: A Space Odyssey. Silent characters like Chell and Gordon Freeman could be selected but would emit silent audio files when any text was submitted. Characters from Undertale and Celeste did not produce spoken words but instead generated their games’ distinctive beeps when text was entered.

Sample emoji probability distributions generated by the DeepMoji model. These emoji distributions were displayed on 15.ai as part of its technical metrics and graphs.

From 2020, 15.ai has generated audio at 44.1 kHz sampling rate—higher than the 16 kHz standard used by most deep learning text-to-speech systems of that period. This higher fidelity created more detailed audio spectrograms and greater audio resolution, though it also made any synthesis imperfections more noticeable. The system processed speech using customized deep neural networks combined with specialized audio synthesis algorithms.[55] While the underlying technology could produce 10 seconds of audio in less than 10 seconds of processing time (i.e. faster-than-real-time), the actual user experience often involved longer waits as the servers managed thousands of simultaneous requests, sometimes taking more than a minute to deliver results.[56]
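
The “faster than real-time” property is conventionally quantified by the real-time factor (RTF): processing time divided by the duration of the audio produced, with values below 1.0 meaning faster than real time. A small illustrative calculation follows; the timing figure is hypothetical, not a measurement of 15.ai:

```python
# Real-time factor (RTF) = processing time / duration of generated audio.
# The 6.5 s processing time below is a made-up example value.

def real_time_factor(processing_s: float, audio_s: float) -> float:
    return processing_s / audio_s

audio_s = 10.0                       # 10 seconds of generated speech
samples = int(audio_s * 44_100)      # 441,000 samples at 44.1 kHz
rtf = real_time_factor(6.5, audio_s)
print(samples, f"RTF = {rtf:.2f}",
      "faster than real time" if rtf < 1.0 else "slower than real time")
```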

Due to its nondeterministic design, 15.ai produced variations in its speech output. 15.ai introduced the concept of emotional contextualizers, which allowed users to specify the emotional tone of generated speech through guiding phrases.[57] The emotional contextualizer functionality utilized DeepMoji, a sentiment analysis neural network developed at the MIT Media Lab that processed emoji embeddings from 1.2 billion Twitter posts to analyze their emotional content. If an input into 15.ai contained additional context (specified by a vertical bar), the additional context following the bar would be used as the emotional contextualizer.[58] For example, if the input was Today is a great day!|I'm very sad., the selected character would speak the sentence “Today is a great day!” in the emotion one would expect from someone saying the sentence “I’m very sad.”[59]
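
The documented text_1|text_2 convention amounts to splitting the input on the first vertical bar and routing the trailing portion to the sentiment model rather than the synthesizer. A hypothetical re-implementation of just the parsing step (15.ai’s actual code was never published, and the no-bar fallback here is an assumption):

```python
# Split a prompt into (spoken text, emotional contextualizer) following the
# documented "text_1|text_2" convention. Hypothetical helper, not 15.ai code.

def split_emotional_contextualizer(prompt: str) -> tuple[str, str]:
    spoken, _, context = prompt.partition("|")
    # Assumed fallback: with no bar, the spoken text sets its own emotion.
    return spoken.strip(), (context.strip() or spoken.strip())

spoken, context = split_emotional_contextualizer("Today is a great day!|I'm very sad.")
print(spoken)   # Today is a great day!   -> sent to the synthesizer
print(context)  # I'm very sad.           -> sent to the sentiment analyzer
```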

An example of a conversion of the text “daisy bell” into speech, starting from English orthography. English words are parsed as a string of ARPABET phonemes, then passed through a pitch predictor and a mel-spectrogram generator to generate audio.

15.ai used pronunciation data from Oxford Dictionaries API, Wiktionary, and CMU Pronouncing Dictionary, which uses ARPABET phonetic transcriptions. Users could input ARPABET transcriptions by enclosing phoneme strings in curly braces to correct mispronunciations. 15.ai’s interface used color-coding to indicate pronunciation certainty: green for known words, blue for manual ARPABET input, and red for algorithmically predicted pronunciations. It also displayed technical metrics, graphs, and comprehensive model analytics, which included sentiment analysis and automatic improvements to the vocoder. The platform limited its prompt to 200 characters; users could combine multiple generations for longer speech sequences.
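
That input convention can be sketched as a small tokenizer: curly-brace spans are taken as literal ARPABET, other words are looked up in a dictionary, and each pronunciation is tagged with its source, mirroring the green/blue/red color-coding. The two-entry dictionary below is an illustrative stand-in for the CMU Pronouncing Dictionary; none of this is 15.ai’s actual code:

```python
# Tag each token's pronunciation source the way 15.ai's UI color-coded it:
# dictionary lookup (green), manual ARPABET in braces (blue), model guess (red).
import re

CMU_DICT = {"daisy": "D EY1 Z IY0", "bell": "B EH1 L"}  # tiny CMUdict stand-in

def to_phonemes(text: str) -> list[tuple[str, str]]:
    tokens = re.findall(r"\{[^}]*\}|[A-Za-z']+", text)
    out = []
    for tok in tokens:
        if tok.startswith("{"):
            out.append((tok[1:-1].strip(), "manual"))          # blue
        elif tok.lower() in CMU_DICT:
            out.append((CMU_DICT[tok.lower()], "dictionary"))  # green
        else:
            out.append((tok, "predicted"))                     # red
    return out

print(to_phonemes("daisy {B EH1 L}"))
# [('D EY1 Z IY0', 'dictionary'), ('B EH1 L', 'manual')]
```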

Later versions of 15.ai introduced multi-speaker capabilities. Rather than training separate models for each voice, 15.ai used a unified model that learned multiple voices simultaneously through speaker embeddings: learned numerical representations that captured each character’s unique vocal characteristics. Along with the emotional context conferred by DeepMoji, this neural network architecture enabled the model to learn shared patterns across different characters’ emotional expressions and speaking styles, even when individual characters lacked examples of certain emotional contexts in their training data.
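
A minimal sketch of the general speaker-embedding technique as used in Tacotron-style multi-speaker systems; 15.ai’s actual architecture was never published, so the dimensions and the concatenation scheme below are assumptions:

```python
# One learned vector per character voice, concatenated onto every timestep of
# the text encoding, so a single shared network serves all voices.
import torch
import torch.nn as nn

class MultiSpeakerConditioner(nn.Module):
    def __init__(self, n_speakers: int, text_dim: int = 256, spk_dim: int = 64):
        super().__init__()
        self.text_dim = text_dim
        self.speaker_table = nn.Embedding(n_speakers, spk_dim)

    def forward(self, text_enc: torch.Tensor, speaker_id: torch.Tensor) -> torch.Tensor:
        # text_enc: (batch, time, text_dim); speaker_id: (batch,)
        assert text_enc.size(-1) == self.text_dim
        spk = self.speaker_table(speaker_id)                  # (batch, spk_dim)
        spk = spk.unsqueeze(1).expand(-1, text_enc.size(1), -1)
        return torch.cat([text_enc, spk], dim=-1)             # (batch, time, text_dim + spk_dim)

cond = MultiSpeakerConditioner(n_speakers=50)
out = cond(torch.randn(2, 120, 256), torch.tensor([3, 17]))
print(out.shape)  # torch.Size([2, 120, 320])
```

Because every voice shares the same encoder and decoder weights, a pattern such as angry prosody learned from one character’s data can transfer to characters whose own data lacks that emotion, which is the behavior described above.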

Reception

Critical reception

Critics described 15.ai as easy to use and generally able to convincingly replicate character voices, with occasional mixed results. Natalie Clayton of PC Gamer wrote that SpongeBob SquarePants’ voice was replicated well, but described challenges in mimicking the Narrator from The Stanley Parable: “the algorithm simply can’t capture Kevan Brighting’s whimsically droll intonation.” Zack Zwiezen of Kotaku reported that “[his] girlfriend was convinced it was a new voice line from GLaDOS’ voice actor”. Taiwanese newspaper United Daily News also highlighted 15.ai’s ability to recreate GLaDOS’s mechanical voice, alongside its diverse range of character voice options.[64] Yahoo! News Taiwan reported that “GLaDOS in Portal can pronounce lines nearly perfectly”, but also criticized that “there are still many imperfections, such as word limit and tone control, which are still a little weird in some words.”[65] Chris Button of Byteside called the ability to clone a voice with only 15 seconds of data “freaky,” but also found the tech behind it impressive. Robin Lamorlette of Clubic described the technology as “devilishly fun” and wrote that Twitter and YouTube were filled with creative content from users experimenting with the tool.[67] The platform’s voice generation capabilities were regularly featured on Equestria Daily with documented updates, fan creations, and additions of new character voices. In a post introducing new character additions to 15.ai, Equestria Daily’s founder Shaun Scotellaro wrote that “some of [the voices] aren’t great due to the lack of samples to draw from, but many are really impressive still anyway.” Chinese My Little Pony fan site EquestriaCN also documented 15.ai’s development and highlighted its various updates, though they criticized some of the bugs and the long queue wait times of the application.

Multiple other critics also found the word count limit, prosody options, and English-only nature of the application not entirely satisfactory. Peter Paltridge of Anime Superhero News opined that “voice synthesis has evolved to the point where the more expensive efforts are nearly indistinguishable from actual human speech,” but also stated that “In some ways, SAM is still more advanced than this. It was possible to affect SAM’s inflections by using special characters, as well as change his pitch at will. With 15.ai, you’re at the mercy of whatever random inflections you get.” Conversely, Lauren Morton of Rock, Paper, Shotgun praised the depth of pronunciation control—”if you’re willing to get into the nitty gritty of it”. Similarly, Eugenio Moto of Qore.com wrote that “the most experienced of users can change parameters like the stress or the tone.”[73] Takayuki Furushima of Den Fami Nico Gamer highlighted the “smooth pronunciations”, and Yuki Kurosawa of AUTOMATON wrote that its “rich emotional expression” was a major feature; both Japanese authors mentioned the lack of Japanese-language support.[74] Renan do Prado of Arkade and José Villalobos of LaPS4 remarked that while users could create amusing results in Portuguese and Spanish respectively, the generation performed best in English.[75] Chinese gaming news website GamerSky called the app “interesting”, but also criticized the word count limit of the text and the occasional lack of intonations.[76] Machine learning professor Yongqiang Li remarked in his blog that the application was still free despite having 5,000 people generating voices concurrently at the time of writing.[77] Marco Cocomello of GLITCHED remarked that despite the 200-character limitation, the results “blew [him] away” when testing the app with GLaDOS’s voice. Spanish author Álvaro Ibáñez wrote in Microsiervos that he found the rhythm of the AI-generated voices interesting and that 15.ai was able to adapt its delivery based on the text’s meaning.[79]

Technical publications provided more in-depth analysis of 15.ai’s capabilities and limitations compared to other text-to-speech technologies of the time. Rionaldi Chandraseta of Towards Data Science observed that voice models trained on larger datasets created more convincing output with better phrasing and natural pauses, particularly for extended text.[55] Bai Feng of XinZhiYuan on QQ News highlighted the technical achievement of 15.ai’s high-quality output (44.1 kHz sampling rate) despite using minimal training data, remarking that this was of significantly higher quality than typical deep learning text-to-speech implementations which used 16 kHz sampling rates. The outlet also acknowledged that while some pronunciation errors occurred due to the limited training data, this was understandable given that traditional deep learning models typically required 40 or more hours of training data.[80] Similarly, Parth Mahendra of AI Daily observed that while the system “does a good job at accurately replicating most basic words,” it struggled with more complex terms, noting that characters would “absolutely butcher the pronunciation” of certain words. Ji Yunyo of NetEase News called the technology behind 15.ai “remarkably efficient” but also criticized its emotional limitations, writing that the emotional expression was relatively “neutral” and that “extreme” emotions couldn’t be properly synthesized, making it less suitable for not safe for work applications.[81] Ji also wrote that while many deepfake videos required creators to extract and edit material from hours of original content for very short results, 15.ai could achieve similar or better effects with only a few dozen minutes of training data per character.[82]

Ellen McLain (voice of GLaDOS in Portal) and John Patrick Lowrie (voice of the Sniper in Team Fortress 2) were interviewed on The VŌC Podcast in 2021 about their perspectives on 15.ai and AI voice synthesis technology.

Some voice actors whose characters appeared on 15.ai have publicly shared their thoughts about the platform. In a 2021 interview on video game voice acting podcast The VŌC, John Patrick Lowrie—who voices the Sniper in Team Fortress 2—explained that he had discovered 15.ai when a prospective intern showed him a skit she had created using AI-generated voices of the Sniper and the Spy from Team Fortress 2. Lowrie commented:

“The technology still has a long way to go before you really believe that these are just human beings, but I was impressed by how much [15.ai] could do. You certainly don’t get the delivery that you get from an actual person who’s analyzed the scene, […] but I do think that as a fan source—for people wanting to put together mods and stuff like that—that it could be fun for fans to use the voice of characters they like.”

He drew an analogy to synthesized music, adding:

“If you want the sound of a choir, and you want the sound of an orchestra, and you have the money, you hire a choir and an orchestra. And if you don’t have the money, you have something that sounds pretty nice; but it’s not the same as a choir and an orchestra.”[video 2]

In a 2021 live broadcast on his Twitch channel, Nathan Vetterlein—the voice actor of the Scout from Team Fortress 2—listened to an AI recreation of his character’s voice. He described the impression as “interesting” and said that “there’s some stuff in there.”[video 3]

Ethical concerns

Other voice actors had mixed reactions to 15.ai’s capabilities. While some industry professionals acknowledged the technical innovation, others raised concerns about the technology’s implications for their profession. When voice actor Troy Baker announced his partnership with Voiceverse NFT, which had misappropriated 15.ai’s technology, it sparked widespread controversy within the voice acting industry. Critics raised concerns about automated voice acting’s potential reduction of employment opportunities for voice actors, risk of voice impersonation, and potential misuse in explicit content. Ruby Innes of Kotaku Australia wrote that “this practice could potentially put voice actors out of work considering you could just use their AI voice rather than getting them to voice act for a project and paying them.” In her coverage of the Voiceverse controversy, Edie WK of Checkpoint Gaming raised the concern that “this kind of technology has the potential to push voice actors out of work if it becomes easier and cheaper to use AI voices instead of working with the actor directly.”

While 15.ai limited its scope to fictional characters and did not reproduce voices of real people or celebrities, computer scientist Andrew Ng commented that similar technology could be used to do so, including for nefarious purposes. In his 2020 assessment of 15.ai, he wrote:

“Voice cloning could be enormously productive. In Hollywood, it could revolutionize the use of virtual actors. In cartoons and audiobooks, it could enable voice actors to participate in many more productions. In online education, kids might pay more attention to lessons delivered by the voices of favorite personalities. And how many YouTube how-to video producers would love to have a synthetic Morgan Freeman narrate their scripts?

While discussing potential risks, he added:

“…but synthesizing a human actor’s voice without consent is arguably unethical and possibly illegal. And this technology will be catnip for deepfakers, who could scrape recordings from social networks to impersonate private individuals.”

Legacy

A January 2021 CNN broadcast showing a viral video that used 15.ai to replace Donald Trump’s Home Alone 2 cameo with the Heavy Weapons Guy from Team Fortress 2

15.ai was an early pioneer of audio deepfakes, and its popularity led to the emergence of AI speech synthesis-based memes during the initial stages of the AI boom in 2020. 15.ai is credited as the first platform to popularize AI voice cloning in Internet memes and content creation,[1] particularly through its ability to generate convincing character voices in real-time without requiring extensive technical expertise. The platform’s impact was especially notable in fan communities, including the My Little Pony: Friendship Is Magic, Portal, Team Fortress 2, and SpongeBob SquarePants fandoms, where it enabled the creation of viral content that garnered millions of views on social media. Team Fortress 2 content creators also used the platform to produce both short-form memes and complex narrative animations using Source Filmmaker. Fan creations included skits and fan animations, crossover content, recreations of viral videos, adaptations of fan fiction, music videos, and musical compositions. Some fan creations gained mainstream attention: a viral video that replaced Donald Trump’s cameo in Home Alone 2: Lost in New York with the Heavy Weapons Guy’s AI-generated voice was featured on a daytime CNN segment in January 2021. Some users integrated 15.ai’s voice synthesis with voice command software to create personal assistants.

The Tax Breaks is a 17-minute fan-made episode of Friendship Is Magic produced using character voices from 15.ai.

Its influence since its launch has been publicly recognized, with commercial alternatives like ElevenLabs[b] and Speechify emerging to fill the void after its initial shutdown. Contemporary generative voice AI companies have acknowledged 15.ai’s pioneering role. Y Combinator startup PlayHT called the debut of 15.ai “a breakthrough in the field of text-to-speech (TTS) and speech synthesis”. Cliff Weitzman, the founder and CEO of Speechify, credited 15.ai for “making AI voice cloning popular for content creation by being the first […] to feature popular existing characters from fandoms”. Mati Staniszewski, co-founder and CEO of ElevenLabs, wrote that 15.ai was transformative in the field of AI text-to-speech.

15.ai established technical precedents that influenced subsequent developments in AI voice synthesis. Its integration of DeepMoji for emotional analysis demonstrated the viability of incorporating sentiment-aware speech generation, while its support for ARPABET phonetic transcriptions set a standard for precise pronunciation control in public-facing voice synthesis tools. The platform’s unified multi-speaker model, which enabled simultaneous training of diverse character voices, allowed the system to recognize emotional patterns across different voices even when certain emotions were absent from individual character training sets; for example, if one character had examples of joyful speech but no angry examples, while another had angry but no joyful samples, the system could learn to generate both emotions for both characters by understanding the common patterns of how emotions affect speech.

15.ai also contributed to the reduction of training data requirements for speech synthesis. Earlier systems like Google AI’s Tacotron and Microsoft Research’s FastSpeech required tens of hours of audio to produce acceptable results and failed to generate intelligible speech with less than 24 minutes of training data.[5] In contrast, 15.ai demonstrated the ability to generate speech with substantially less training data—specifically, the name “15.ai” refers to the creator’s claim that a voice could be cloned with just 15 seconds of data. This approach to data efficiency influenced subsequent developments in AI voice synthesis technology, as the 15-second benchmark became a reference point for later voice synthesis systems. The original claim that only 15 seconds of data is required to clone a human’s voice was corroborated by OpenAI in 2024.

See also

  • AI boom
  • Brony fandom
  • Character.ai
  • Deepfake
  • Ethics of artificial intelligence
  • WaveNet
  • My Little Pony: Friendship Is Magic fandom
  • Synthetic media

Notes

  1. ^ The term “faster than real-time” in speech synthesis means that the system can generate audio more quickly than the actual duration of the speech—for example, generating 10 seconds of speech in less than 10 seconds would be considered faster than real-time.
  2. ^ which uses “11.ai” as a legal byname for its web domain

References

Notes

  1. ^ a b
    • Cocomello 2021: “However, back then if you wanted to create your own dialogue, it required layers of sound enhancements and tweaks. Thankfully, the world has evolved and now thanks to the 15.ai app, we can make […] popular characters say whatever we want”
    • MrSun 2021: 大家是否都曾經想像過,假如能讓自己喜歡的遊戲或是動畫角色說出自己想聽的話,不論是名字、惡搞或是經典名言,都是不少人的夢想吧。不過來到 2021 年,現在這種夢想不再是想想而已,因為有一個網站通過 AI 生成的技術,(transl. Have you ever imagined what it would be like if your favorite game or anime characters could say exactly what you want to hear? Whether it’s names, parodies, or classic quotes, this is a dream for many. However, as we enter 2021, this dream is no longer just a fantasy, because there is a website that uses AI-generated technology,).
    • Anirudh VK 2023: “While AI voice memes have been around in some form since ’15.ai’ launched in 2020, […]”
    • Wright 2023: “AI voice tools used to create “audio deepfakes” have existed for years in one form or another, with 15.ai being a notable example.”
    • Weitzman 2023: “It gained popularity because it was the first AI voice platform that featured an assortment of fictional characters from a variety of media sources”
    • Temitope 2024: “During this period, 15.ai earned credit for single-handedly popularizing AI voice cloning—often described as ‘audio deepfakes’—in memes, viral content, and fan-driven media.”
    • Abisola 2025: “Many credit 15.ai as the first mainstream text-to-speech platform that truly made ‘audio deepfakes’ go viral,”

  2. ^ a b
    • Lam 2022: “audio sold as an NFT on Voiceverse’s platform was acknowledged by the company for having been created by 15.ai”
    • Groth-Anderson 2022: “Voiceverse har nu indrømmet, efter en masse beskyldninger, at de har stjålet, og solgt, AI-baseret stemmeskuespil som NFT’er baseret på en stemme opfundet og designet af en tjeneste ved navn 15.ai.” (transl. “Voiceverse has now admitted, after a lot of accusations, that they have stolen, and sold, AI-based voice acting as NFTs based on a voice invented and designed by a service called 15.ai.”)
    • Parker 2022: “VoiceverseNFT previously admitted to selling voice content stolen from fifteenAI”
    • Phillips 2022: “Indeed, log files apparently showed Voiceverse NFT had used 15.ai for an AI-powered voice to be sold as an NFT.”
    • Kuchkanov 2022: “Его работу взяли и продавали как уникальный токен.” (transl. “[15.ai’s] work was taken and sold as a unique token.”)

  3. ^ a b
    • Lawrence 2022: “it was revealed that [Voiceverse] had stolen voice work it’d been using.”
    • Wright 2022: “Voiceverse NFT […] admitted to using content without permission from 15.ai”
    • Carcasole 2022: “Voiceverse NFT was caught having taken voice lines from […] 15.ai”
    • Innes 2022: “Voiceverse NFT had taken voice lines from [15.ai] without giving credit”
    • Toh 2022: “Voiceverse has admitted that they stole voice lines”
    • Muropaketti 2022: “että yhtiö käytti luvatta kilpailijan ääninäyttelyä” (transl. “[Voiceverse] used [15.ai’s] voice acting without permission”)
    • Groth-Anderson 2022: “Voiceverse har nu indrømmet […] at de har stjålet” (transl. “Voiceverse has now admitted […] that they stole”)
    • Aktaş 2022: “Troy Baker-backed NFT firm admitted using voice lines from [15.ai] without permission”
    • White 2022: “Voiceverse had stolen work without crediting it from […] 15.ai”
    • Skorich 2022: “компанию уличили в воровстве в тот же день, когда актёр объявил о сотрудничестве” (transl. the company was caught stealing on the same day the actor announced his partnership)
    • Piletsky 2022: “Вскоре в тот же день Voiceverse NFT уличили в воровстве.” (transl. “Shortly after that same day, Voiceverse NFT was caught stealing.”)
    • Lopez 2022: “Voiceverse NFT Service Reportedly Uses Stolen Technology from 15ai”
    • Baylos 2022: “la firma de NFTs ya mencionada estaría intentando sacarle partido al comercializar una muestra […] sin el permiso de su autor” (transl. “the aforementioned NFT company would be trying to take advantage of it by marketing a sample […] without the permission of its author.”)

  4. ^ a b Google 2018
  5. ^ Hacker News 2022
  6. ^ “Examples”. May 15, 2025. Retrieved May 16, 2025.
  7. ^ Chandraseta 2021; Li 2021; Temitope 2024.
  8. ^ Chandraseta 2021; Feng 2020.
  9. ^ “About”. 15.ai (Official website). March 2, 2020. Archived from the original on March 3, 2020. Retrieved December 23, 2024.
  10. ^ Коэн 2022: “Как объяснили в дискорде, маркетинговая команда так спешила сделать партнерское демо, что не дождалась создания подходящего голоса и взяла его с 15.ai.” (transl. “As explained in the Discord, the marketing team was in such a rush to make a partner demo that they didn’t wait for a suitable voice to be created and took it from 15.ai.”)
  11. ^ Wright 2022; Groth-Anderson 2022; Myrén 2022; Archer 2022; Williams 2022.
  12. ^ Paul Lehrman and Linnea Sage, et al. v. LOVO, Inc., No. 1:24-cv-03770, 38 (S.D.N.Y. 2024) (“Separately, VoiceVerse has already been found to have stolen technology from another company. See Ule Lopez, WCCF Tech, “Voiceverse NFT Service Reportedly Uses Stolen Technology from 15ai,” (Jan. 16, 2022), https://wccftech.com/voiceverse-nft-service-usesstolen-technology-from-15ai/.”).
  13. ^ “FAQ”. 15.dev. May 18, 2025. Retrieved May 18, 2025.
  14. ^ @fifteenai (May 18, 2025). “We are so back. https://15.dev Only MLP characters for now. More characters, features, and improvements will be added soon. Check Twitter and/or the Discord server (linked on the website) for updates! (Expect possible downtime as I calibrate server capacity and GPU allocations depending on how busy the website gets.)” (Tweet). Retrieved May 18, 2025 – via Twitter.
  15. ^ a b Chandraseta 2021.
  16. ^ Chandraseta 2021; Lamorlette 2021.
  17. ^ Chandraseta 2021; Temitope 2024.
  18. ^ Chandraseta 2021: “By adding a ‘|’ after the original sentence and providing an extra sentence, we could control what emotion the original sentence will be spoken with. In other words, ‘text_1|text_2’ will produce a voice line of text_1 with the emotion of text_2.”
  19. ^ Chandraseta 2021: “because it could force the bot into generating previously unknown data, such as saying ‘Today is a great day’ with a sad or angry emotion”
  20. ^ 遊戲 2021: “目前「15.ai」的網頁上,提供了不少的音源,[…]除了《傳送門》之外,15.ai 網站目前也支援了許多來自遊戲、電影或動畫中的人物語音,” (transl. “Currently, the “15.ai” website provides a lot of audio sources. […] In addition to “Portal”, the 15.ai website currently also supports voices for many characters from games, movies or animations.”)
  21. ^ MrSun 2021: “的 GLaDOS 也能完美的唸出任何台詞。當然網站也補充目前還有很多不完美的地方,像是字數限制、語氣控制在某些話上還是略有怪異,但只要肯花時間,也能像是其他網友一樣,通過剪輯來完成有趣的創作,” (transl. “Even GLaDOS in “Portal” can perfectly recite any lines. Of course, the website also added that there are still many imperfections, such as word limit and tone control, which are still a bit weird in some words, but as long as you are willing to spend time, you can also complete interesting creations through editing like other netizens.”)
  22. ^ Lamorlette 2021: “On peut donc retrouver sur ces réseaux de nombreux exemples de ce que peut donner le mélange entre un esprit créatif et une technologie aussi efficace que diablement amusante.” (transl. “These social networks are therefore full of examples of what can be achieved by combining a creative mind with technology that is as effective as it is devilishly fun.”)
  23. ^ Moto 2021: “Incluso, los más clavados pueden cambiar algunos parámetros como la intencionalidad o el tono.” (transl. “Actually, the most experienced of users can change some parameters like the stress or tone.”)
  24. ^
    • Furushima 2021: 日本語入力には対応していないが、ローマ字入力でもなんとなくそれっぽい発音になる。; 15.aiはテキスト読み上げサービスだが、特筆すべきはそのなめらかな発音と、ゲームに登場するキャラクター音声を再現している点だ。 (transl. It does not support Japanese input, but even if you input using romaji, it will somehow give you a similar pronunciation.; 15.ai is a text-to-speech service, but what makes it particularly noteworthy is its smooth pronunciation and the fact that it reproduces the voices of characters that appear in games.)
    • Kurosawa 2021: “もうひとつ15.aiの大きな特徴として挙げられるのが、豊かな感情表現だ” (transl. “Another major feature of 15.ai is its rich emotional expression.”)
    • Kurosawa 2021: “英語版ボイスのみなので注意” (transl. “Please note that this is an English voice only version.”)

  25. ^
    • do Prado 2021: “Obviamente o programa funciona no idioma inglês, mas dá pra gerar umas frases bem emboladas e engraças em português, estilo aqueles memes usando vozes em outros idiomas falando em português.” (transl. “Obviously, the program works in English, but you can generate some really confusing and funny sentences in Portuguese, like those memes using voices in other languages speaking Portuguese.”)
    • Villalobos 2021: “En este sentido, en las últimas horas se ha hecho popular un sitio web que emula la voz de GlaDOS para que diga todas las palabras que quieras, siempre y cuando estén en inglés, aunque puedes escribir algo en español e intentará pronunciarlo, pero no lo hará correctamente.” (transl. “In this sense, in recent hours a website has become popular that emulates the voice of GlaDOS so that it says all the words you want, as long as they are in English, although you can write something in Spanish and it will try to pronounce it, but it will not do it correctly.”)

  26. ^
    • GamerSky 2021: “虽然AI的声音缺少了些抑扬顿挫,不过效果也还算有趣。” (transl. “Although the AI’s voice lacks some intonation, the effect is still interesting.”)
    • GamerSky 2021: “目前15.ai提供的角色选项较少,由于文本的字数限制,生成的语音也相对较短” (transl. “Currently, 15.ai provides relatively few character options, and due to the word limit of the text, the generated voice is relatively short.”)

  27. ^ Li 2021: “该网站的访问量为在线任务差不多5000以上,而且目前完全免费,” (transl.: “The number of requests to the website is more than 5,000 tasks, and it is still currently completely free.”)
  28. ^ Ibáñez 2022: “Personalmente encontré interesantes las pausas y el ritmo y que ciertamente se nota que según el contenido del texto se «interpreta» el resultado según lo que se intenta transmitir.” (transl. “Personally, I found the pauses and rhythm interesting, and that it is certainly noticeable that depending on the content of the text, the result is ‘interpreted’ according to what is being trying to convey.”)
  29. ^
    • Feng 2020: “该工具生成的音频文件的采样率为 44100 Hz,而大多数基于深度学习的文本转语音实现,所使用的采样率为16,000 Hz。所以用它产生的音频,声谱会更详细(更高质量的音频),同时缺陷也更明显。” (transl. “The audio files generated by this tool have a sampling rate of 44100 Hz, while most deep learning-based text-to-speech implementations use a sampling rate of 16,000 Hz. Therefore, the audio generated by it will have a more detailed sound spectrum (higher quality audio), but the defects will be more obvious.”)
    • Feng 2020: “当然在这么小的语料上训练的模型也是有缺陷的,有些单词可能发音不准确,其实这也很好理解,即使是人,在遇到生词的时候也不一定能准确发音,而传统的深度模型通常有 40 个小时或者更多的语料,所以错误率会低一些。” (transl. “Of course, the model trained on such a small corpus is also flawed, and some words may not be pronounced correctly. In fact, this is easy to understand. Even humans may not be able to pronounce new words accurately when they encounter them. Traditional deep models usually have 40 hours or more of corpus, so the error rate will be lower.”)

  30. ^ Ji 2021: “但是由于情绪表现只能联系上下文进行自动识别,导致这些语音在情感表达上比较“中庸”,一些“极端”的情绪无法通过语音合成正常表达,[…]距离其被正式用于某些NSFW的同人作品,还有很长的路要走。” (transl. “the emotional expression can only be automatically recognized in the context, which makes these voices relatively “neutral” in emotional expression. Some “extreme” emotions cannot be expressed normally through voice synthesis. […] it still has a long way to go before it can be officially used in some NSFW fan works.”)
  31. ^ Ji 2021: “网友在油管上看到的许多“深度伪造”视频,都依赖视频创作者从原本数小时的数据资料里进行提取编辑,最终才能制作非常简短的内容,并且呈现效果还很一般。而15.ai的开发者表示,自己的这项技术可以轻松实现那些视频效果(事实上15.ai的许多角色进行深度学习的数据时长只有几十分钟)。” (transl. “Many of the “deep fake” videos that netizens see on YouTube rely on video creators to extract and edit hours of data to produce very short content, and the presentation effect is still very average. The developers of 15.ai said that their technology can easily achieve those video effects (in fact, the data for deep learning of many characters of 15.ai is only tens of minutes long).”)

Tweets

Videos

Works cited
