SEOUL, July 19 (Reuters) – In a dimly lit recording studio in Seoul, producers at the K-pop music label that introduced the world to hit boy group BTS are using artificial intelligence to meld a South Korean singer’s voice with those of native speakers in five other languages.
The technology enabled HYBE (352820.KS), South Korea’s biggest music label, to release a track by singer MIDNATT in six languages – Korean, English, Spanish, Chinese, Japanese and Vietnamese – in May.
Some K-pop singers have released songs in English and Japanese in addition to their native Korean, but using the new technology for a simultaneous six-language release is a global first, according to HYBE, and could pave the way for it to be used by more popular acts.
“We would first listen to the reaction, the voice of the fans, then decide what our next steps should be,” said Chung Wooyong, the head of HYBE’s interactive media arm, in an interview at the company’s studio.
Lee Hyun, 40, known as MIDNATT, who speaks only limited English and Chinese in addition to Korean, recorded the song “Masquerade” in each language.
Native speakers read out the lyrics, and later the two were seamlessly combined with the help of HYBE’s in-house AI music technology, Chung said.
The song is the latest sign of the growing influence of AI in the music industry, at a time when the Grammy Awards have introduced new rules for the technology’s use and AI-generated mash-ups of songs are flooding social media.
“We divided a piece of sound into different components – pronunciation, timbre, pitch and volume,” Chung said. “We looked at pronunciation, which is associated with tongue movement, and used our imagination to see what kind of outcome we could produce using our technology.”
In a before-and-after comparison shown to Reuters, an elongated vowel sound was added to the word “twisted” in the English lyrics, for instance, to sound more natural, while no detectable change was made to the singer’s voice.
Using deep learning powered by the Neural Analysis and Synthesis (NANSY) framework developed by Supertone makes the song sound more natural than using non-AI software, Supertone chief operating officer Choi Hee-doo said.
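Chung’s description reflects a standard signal-processing view of the voice: pitch, volume and timbre can each be measured separately from a recording, while pronunciation is the harder part that NANSY’s neural modelling addresses. As a rough illustration only – not HYBE’s or Supertone’s actual code – the short Python sketch below extracts common proxies for those three measurable components using the open-source librosa library; the file name vocal.wav is hypothetical.

```python
# Illustrative sketch only: rough stand-ins for three of the components
# Chung names (pitch, volume, timbre), extracted with librosa.
import librosa
import numpy as np

# "vocal.wav" is a hypothetical input file; sr=None keeps its native rate.
y, sr = librosa.load("vocal.wav", sr=None)

# Pitch: fundamental-frequency contour via the pYIN estimator
# (f0 is NaN on unvoiced frames, so use nan-aware statistics).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Volume: frame-by-frame root-mean-square energy.
rms = librosa.feature.rms(y=y)[0]

# Timbre: mel-frequency cepstral coefficients, a common timbre descriptor.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(f"median pitch: {np.nanmedian(f0):.1f} Hz")
print(f"mean loudness (RMS): {rms.mean():.4f}")
print(f"timbre features: {mfcc.shape[0]} MFCCs x {mfcc.shape[1]} frames")
```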
HYBE announced the 45 billion won ($36 million) acquisition of Supertone in January. HYBE said it planned to make some of the AI technology used in MIDNATT’s song available to creators and the public, but did not specify whether it would charge fees.
‘IMMERSIVE EXPERIENCE’
MIDNATT said using AI had allowed him a “broader spectrum of artistic expressions.”
“I feel that the language barrier has been lifted and it’s much easier for global fans to have an immersive experience with my music,” he said in a statement.
While the technology is not new, it is an innovative way to use AI in music, said Valerio Velardo, director of The Sound of AI, a Spain-based consulting firm for AI music and audio.
Not only professional musicians but also a wider population will benefit from AI music technology in the long term, Velardo said.
“It’s going to lower the barrier of music creation. It’s a little bit like Instagram for pictures but in the case of music.”
For now, HYBE’s pronunciation correction technology takes “weeks or months” to do its job, but once the process speeds up it could serve a wider range of uses, such as interpreting in video conferences, said Choi Jin-woo, the producer of MIDNATT’s “Masquerade”, who goes by the name Hitchhiker.
Reporting by Hyunsu Yim; Additional reporting by Daewoung Kim and Hyun Young Yi; Editing by Josh Smith and Jamie Freed