Sponsored Content

Opinion by: Dzmitry Saksonau, CEO of JGGL.

The music industry recently closed one of its most consequential eras in decades. Warner Music settled its copyright lawsuit with Udio in November 2025 and signed a licensing deal for a new AI music platform.

Days later, Warner struck a similar agreement with Suno, the most popular AI music generator, with over 100 million users and a $2.45-billion valuation.

All three major labels now have licensing agreements with the AI platforms they sued just a year ago.

By Grammy Week 2026, the conversation had shifted. Recording Academy CEO Harvey Mason Jr. admitted that every producer he knows already uses AI in the studio and called AI policy “the toughest part of my job.”

He’s not alone in that sentiment. Artists want to create with these tools, but they also don’t want their work strip-mined without consent or compensation.

As AI becomes a default tool in studios, these deals expose cracks in attribution, ownership and compensation that licensing alone cannot fix. If music is entering an “open studio” era, the industry needs solutions built into the very foundation of creation.

Licensing deals don’t scale for what comes next

Licensing works when creation is centralized and outputs are clearly defined. A label signs a deal with a platform, the platform trains on approved catalogs, and artists opt in to have their voices and compositions used.

That model handles the present, but it does not handle the future.

AI-assisted music is fluid — remixes, iterations and collaborations happen constantly across tools, platforms and communities. A single track might pass through three AI models, two human producers and a remix chain before it reaches an audience.

The Suno-Warner deal already exposed one crack. After the agreement, Suno quietly revised its rights and ownership terms. Language that previously told subscribers “you own the songs” disappeared.

The updated policy now states that users are “generally not considered the owner” of their outputs, even with paid commercial licenses. Ownership, it turns out, is the part that licensing deals struggle to define.

The numbers make the scale problem obvious. Suno alone has 100 million users. You cannot negotiate bespoke agreements for every creative interaction in that ecosystem. The model breaks under its own weight.

The real conflict is about attribution

Too much of the AI-music debate focuses on humans versus machines when the real problem is something else entirely.

The problem is not that AI will replace artists. It’s that nobody can reliably track who created what or who should get paid.

Lose track of who created what, and the money stops flowing to the right people. Once that happens, trust disappears, even if every tool is properly licensed.

We’ve seen a similar pattern play out when streaming became popular. Streaming gave people access to music, and that part was fine. The damage came from opaque value flows that left artists unable to track where their money went.

The same thing happened during the user-generated content fights of the 2010s. Whenever music becomes more accessible without a transparent money trail, creators get burned.

The NO FAKES Act, reintroduced to Congress in April 2025 with bipartisan support from legislators and backing from OpenAI, YouTube and all three major labels, tries to address part of this.


The bill would establish federal protections against unauthorized AI-generated replicas of a person’s voice or likeness. Legislation, however, protects only after the damage is done. It doesn’t prevent the breakdown in the first place.

Without transparent systems baked into the creation process, openness will always feel like exploitation to the people who make the music.

Infrastructure can prevent disputes

Smart contracts can encode royalty splits into the song file itself. When a track sells or streams, payment executes automatically. A three-person band with a 40-30-30 split receives those percentages instantly. There is no label holding funds for 90 days. There are no quarterly statements. There can be no dispute over who owns what percentage. The transaction is recorded on a public ledger. Any collaborator can verify that their share of the royalties hit their wallet.
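The split logic described above can be sketched in plain Python. This is an illustrative model, not any platform’s actual contract code: a real smart contract would run on a blockchain virtual machine, but the core idea — fixed percentages encoded once, applied automatically to every payment — looks like this:

```python
# Minimal sketch of a royalty-split contract, assuming an in-memory
# ledger stands in for on-chain balances.
from decimal import Decimal

class RoyaltySplit:
    """Encodes fixed percentage splits that execute on every payment."""

    def __init__(self, splits):
        # splits: mapping of collaborator -> percentage (must total 100)
        if sum(splits.values()) != 100:
            raise ValueError("splits must total 100 percent")
        self.splits = splits
        self.balances = {name: Decimal("0") for name in splits}

    def pay(self, amount):
        # Each streaming or sales payment is divided instantly,
        # with no intermediary holding funds for 90 days.
        amount = Decimal(str(amount))
        for name, pct in self.splits.items():
            self.balances[name] += amount * pct / 100

# A three-person band with a 40-30-30 split, as in the example above.
track = RoyaltySplit({"alice": 40, "bob": 30, "carol": 30})
track.pay(1000)  # alice is credited 400, bob 300, carol 300
```

Because the split is part of the track’s own record rather than a label’s accounting system, any collaborator can recompute and verify their share from the public ledger.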

The bigger advantage is provenance. Blockchain allows creative works to carry their ownership record as they move across platforms. When a track passes through AI models, remix chains and distribution channels, that record travels with it.

The current system can’t do this. Metadata gets stripped, credits get lost, and payments arrive months late, if they arrive at all.
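The provenance idea can also be sketched. In this hypothetical model (the names and structure here are assumptions for illustration), each step in a track’s history links to the hash of the previous record, so the ownership trail travels with the work and tampering is detectable:

```python
# Illustrative sketch of a hash-linked provenance chain: each entry
# commits to the previous one, like blocks in a blockchain.
import hashlib
import json

def record_step(prev_hash, actor, action):
    """Append one provenance entry linked to the previous record."""
    entry = {"prev": prev_hash, "actor": actor, "action": action}
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry, entry_hash

# A track passing through an AI model, a producer and a remixer.
chain = []
h = "genesis"
for actor, action in [
    ("ai_model_v1", "generated stem"),
    ("producer_a", "arranged and mixed"),
    ("remixer_b", "remixed"),
]:
    entry, h = record_step(h, actor, action)
    chain.append((entry, h))

# Verification: recomputing any entry's hash must match its stored link,
# so stripping or rewriting a credit breaks the chain visibly.
for entry, stored_hash in chain:
    recomputed = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    assert recomputed == stored_hash
```

The point of the design is that credits cannot silently disappear: removing or altering any link invalidates every hash after it, which is exactly the guarantee stripped metadata lacks today.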

Done right, this infrastructure enables what licensing deals never will: a creative environment where artists remix, build on and share each other’s work without losing ownership along the way. Where fans have a real stake in the creative process and where AI tools improve what artists create.

The window to get this right is closing

AI-assisted creation has quietly become the default mode of music production, and the industry now faces a familiar choice. It can keep layering more rules onto outdated systems, or it can rebuild the foundation for how music is made and shared.

The Suno-Warner deal is a good starting point, but it’s not enough by itself.

AI is not the existential risk the industry keeps treating it as — the systems trying to contain it are. Licensing deals were never designed to carry this much weight. The industry needs infrastructure that makes compensation as automatic and fluid as the creative process itself.

If music is truly entering an open-studio era, the industry must build systems that trust creators and make that trust enforceable by design.
