Popular Acts To Use HYBE’s AI Tech? Label Says Fan Reaction Is Key
HYBE has been causing a stir in the contemporary music landscape with a once-secret project centered on artificial intelligence (AI). Leveraging the technology, the label is redefining music production by blending a South Korean singer's voice with the voices of native speakers of five other languages.
The pioneering initiative made its debut in May, when HYBE released a six-language track by singer MIDNATT. The track, titled "Masquerade," was rolled out simultaneously in Korean, English, Spanish, Chinese, Japanese, and Vietnamese, an unprecedented move in the industry.
Previously, K-Pop singers, including members of HYBE's own roster, had released songs in English and Japanese alongside Korean. The new technology, however, has allowed HYBE to push the envelope further with a simultaneous release in six languages.
The next step? According to Chung Woo Yong, the head of HYBE's interactive media arm, whether the tech will be adopted by more popular acts lies entirely in the hands of the fans. And HYBE has plenty of fans to listen to, as it houses massive acts such as BTS, LE SSERAFIM, NewJeans, and ENHYPEN.
We would first listen to the reaction, the voice of the fans, then decide what our next steps should be.
— Chung Woo Yong
Behind the scenes, the process is an intricate one. "We divided a piece of sound into different components: pronunciation, timbre, pitch, and volume," Chung shared. In a before-and-after comparison provided to Reuters, the technology's nuance showed in subtleties such as an elongated vowel added to the word "twisted" in the English lyrics for a more natural effect. Notably, no detectable change was made to the singer's voice.
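HYBE has not published its tooling, but for a rough sense of what "dividing a sound into components" can look like, the sketch below pulls pitch, volume, and a timbre proxy out of a vocal recording using the open-source librosa library. The file name and feature choices are illustrative assumptions only, and the pronunciation (phonetic) component, which would require a speech-recognition-style model, is left out.

```python
# A minimal sketch of component-wise vocal analysis, assuming librosa as a
# stand-in for Supertone's proprietary pipeline. Each factor is extracted
# separately so it could, in principle, be edited and recombined on its own.
import librosa
import numpy as np

def decompose_vocal(path: str, sr: int = 22050) -> dict:
    y, sr = librosa.load(path, sr=sr)

    # Pitch: fundamental-frequency contour via probabilistic YIN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6")
    )

    # Volume: frame-wise RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    # Timbre proxy: MFCCs capture the spectral-envelope character of a voice.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

    return {"pitch_hz": f0, "voiced": voiced_flag, "volume": rms, "timbre": mfcc}

# "masquerade_vocal.wav" is a hypothetical file name for illustration.
features = decompose_vocal("masquerade_vocal.wav")
print({k: np.asarray(v).shape for k, v in features.items()})
```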
But what makes the result sound so natural? It's all thanks to deep learning technology developed by Supertone, a company HYBE acquired for a whopping ₩45 billion (about $35 million USD) earlier this year. Supertone's chief operating officer, Choi Hee-doo, asserted that deep learning powered by the Neural Analysis and Synthesis (NANSY) framework makes the song sound more organic than non-AI software can.
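NANSY is a published research framework, but Supertone's production version is not public. As a hedged, toy-scale illustration of the analysis-and-synthesis idea, the PyTorch sketch below uses small encoders to pull apart content, timbre, pitch, and energy, then a decoder to recombine them into a spectrogram. Every module size, name, and shape here is an assumption for illustration, not Supertone's implementation.

```python
# Toy analysis-and-synthesis model: analyzers isolate factors of a voice;
# a synthesizer recombines them. Swapping one factor (e.g., pronunciation
# from another language) while keeping the timbre vector is the intuition
# behind keeping the singer's voice unchanged across languages.
import torch
import torch.nn as nn

class AnalysisSynthesis(nn.Module):
    def __init__(self, n_mels: int = 80, content_dim: int = 128, speaker_dim: int = 64):
        super().__init__()
        # Analyzer for linguistic content ("pronunciation"), frame by frame.
        self.content_enc = nn.Conv1d(n_mels, content_dim, kernel_size=5, padding=2)
        # Analyzer for timbre: time-averaged into a single speaker vector.
        self.speaker_enc = nn.Sequential(
            nn.Conv1d(n_mels, speaker_dim, kernel_size=5, padding=2),
            nn.AdaptiveAvgPool1d(1),
        )
        # Synthesizer: content + pitch + energy + timbre -> mel spectrogram.
        self.decoder = nn.Conv1d(content_dim + 2 + speaker_dim, n_mels,
                                 kernel_size=5, padding=2)

    def forward(self, mel, f0, energy):
        # mel: (B, n_mels, T); f0 and energy: (B, 1, T)
        content = self.content_enc(mel)
        timbre = self.speaker_enc(mel)                 # (B, speaker_dim, 1)
        timbre = timbre.expand(-1, -1, mel.size(-1))   # broadcast over time
        z = torch.cat([content, f0, energy, timbre], dim=1)
        return self.decoder(z)                         # reconstructed mel

model = AnalysisSynthesis()
mel = torch.randn(1, 80, 200)      # dummy 200-frame mel spectrogram
f0 = torch.rand(1, 1, 200)         # dummy pitch contour
energy = torch.rand(1, 1, 200)     # dummy volume contour
print(model(mel, f0, energy).shape)  # torch.Size([1, 80, 200])
```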
The artist at the center of this innovation, MIDNATT, sees AI as a tool that enables him to broaden his artistic scope.
I feel that the language barrier has been lifted and it’s much easier for global fans to have an immersive experience with my music.
— MIDNATT
Although the use of AI in music is not entirely new, the manner in which HYBE is harnessing it is certainly innovative. For now, the company's pronunciation correction technology takes "weeks or months" to refine a song, but Choi Jin Woo, the producer of "Masquerade," suggested that as the process accelerates, the technology could find wider applications such as real-time translation in video conferences.
While the excitement around this technological innovation is palpable, the use of AI in music has been met with mixed feelings from fans. Some embrace it as an exciting progression for the industry, while others express reservations about a potential loss of authenticity and organic creativity in music. So if HYBE's statements about valuing fan reactions are indeed genuine, the next stage of its AI technology may be on hold for a while longer.