Adobe Research's new experimental work is set to change the way people create and edit custom audio and music. Project Music GenAI Control is an early-stage generative AI music generation and editing tool. It allows creators to generate music from text prompts and then gives them fine-grained control to tweak the audio to their precise needs.
Adobe uses AI for personalized audio
Adobe has a decades-long legacy of innovation in artificial intelligence. Firefly, Adobe’s family of generative AI models, has in record time become the world’s most popular AI image generation model designed for safe commercial use. To date, Firefly has been used to generate over 6 billion images.
Adobe is committed to ensuring that its technology is developed in line with ethical principles of trust, responsibility and transparency in artificial intelligence. All content generated with Firefly automatically includes Content Credentials, “nutrition labels” for digital content that remain associated with it wherever it is used, published or stored.
How Project Music GenAI Control works
The new tools start with a text prompt fed into a generative AI model, an approach Adobe already uses in Firefly. A user enters a prompt such as “powerful rock”, “happy dance” or “sad jazz” to generate music. Once the tools generate the music, fine-grained editing is integrated directly into the workflow.
Through a simple user interface, users can transform the generated audio based on a reference melody; adjust the tempo, structure and repeating patterns of a piece of music; choose when to increase and decrease the audio’s intensity; extend the length of a clip; remix a section; or generate a perfectly repeatable loop.
Instead of manually cutting existing music to create intros, outros and background audio, Project Music GenAI Control helps users create exactly the pieces they need, solving end-to-end workflow pain points.