ByteDance High-Resolution AMT System Achieves SOTA in Piano Note and Pedal Transcription
ByteDance introduces a high-resolution piano transcription system trained by regressing the precise onset and offset times of piano notes and pedals.
AI Technology & Industry Review
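The key idea behind the "high-resolution" claim is to replace binary frame-level onset labels with continuous regression targets that encode each note's exact onset time at sub-frame precision. As a minimal sketch (the function name, hop size, and ramp width `J` here are illustrative assumptions, not the paper's exact configuration), targets can decay linearly with a frame's time distance from the true onset:

```python
import numpy as np

def onset_regression_targets(onset_time, n_frames, hop_s=0.01, J=5):
    """Soft targets encoding an onset time at sub-frame precision.

    Instead of a binary 1 on the nearest frame, every frame within J hops
    of the onset gets a value that decays linearly with its time distance,
    so a model trained on these targets can recover the continuous onset
    time rather than just the nearest frame index.
    """
    frame_times = np.arange(n_frames) * hop_s          # center time of each frame
    delta = np.abs(frame_times - onset_time) / hop_s   # distance in hops
    return np.clip(1.0 - delta / J, 0.0, None)         # linear ramp, zero beyond J hops

targets = onset_regression_targets(onset_time=0.123, n_frames=20)
```

With a 10 ms hop, a binary label could locate the onset no finer than the frame grid; here the relative heights of neighboring target values preserve where inside the frame the onset fell, which is what the system regresses for both notes and pedals.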
In an effort to enrich resources for multispeaker singing-voice synthesis, a team of researchers from the University of Tokyo has developed a Japanese multispeaker singing-voice corpus.
With recent developments in artificial intelligence and automation in machines, robots are advancing into previously unexplored industries such as music and entertainment.
Hundreds of artificial intelligence researchers, UN staff and curious locals listened, watched and tapped their feet as London-born composer and human beatboxer Reeps One “battled” against an AI-powered real-time music generator trained on his own riffs.
Now, China’s elite Central Conservatory of Music (CCOM) has announced it is recruiting PhDs for a new Music AI and Information Technology program. CCOM says prospective students should have a background in Computer Science, AI, or Information Technology, along with musical ability (instrument performance or singing).
Magenta Studio is a Google Brain project “exploring the role of machine learning as a tool in the creative process.” The Google Brain team created the open-source music-making package using machine learning models.
As artificial intelligence matures, so does its potential in the creative industries — one of which happens to be music production. Although AI is not about to top the hit charts any time soon, algorithms are already creating, performing and even monetizing their own musical compositions. Synced took a look into current AI music techniques and projects from tech giants and startups alike.
The “cocktail party effect” describes humans’ ability to hold a conversation in a noisy environment by listening to what their conversation partner is saying while filtering out other chatter, music, ambient noises, etc.
Google has announced the release of MusicVAE, a machine learning model that makes composing musical scores as easy as mixing paint on a palette. A breakthrough from Google Brain’s Magenta project, MusicVAE generates and morphs melodies to output multi-instrumental passages optimized for expression, realism and smoothness, which sound convincingly like human-composed music.
Chinese netizens are all ears for the company’s “hearty” AI-powered music recommendations. In an interview with Synced, NetEase Data Scientist Jia Xu and Product Manager Bowen Shen explained the NetEase system, which learns how to predict what songs will resonate with a user’s particular taste in music…