TL;DR: The advent of AI-based LLM coding applications like Anthropic’s Claude and OpenAI’s ChatGPT has prompted maintainers to experiment with integrating LLM contributions into open source codebases.
This is a fast path to open source irrelevancy, because the US Copyright Office has deemed LLM outputs uncopyrightable. As more uncopyrightable LLM output is integrated into nominally open source codebases, value leaks out of the project, since open source licenses have no force over public domain code.
That means the public domain, AI-generated code can be reused without attribution and, in the case of copyleft licenses, can even be used in closed source projects.
Is it a separate question, though?
Both works are copyrighted; one is simply “all rights reserved” (our leaked commercial code), while the other is licensed under the LGPL. We’re putting both pieces of code into the LLM and then asking it to produce a new version.
What makes the act of leaking different from the act of publishing it on the web? Rights are reserved in either case.
Well, people contribute to copyleft codebases expecting that when others build on their work, the derivative works will be licensed the same way. You don’t need a fork for the value to be lost: contributors expected virality to be part of the bargain, and the new derivative works are clearly partially non-copyleft.
Beyond that, as more of the codebase is LLM-produced, less of it remains protected by the copyleft license, until we reach a ship-of-Theseus situation where the codebase is still available but no longer copyleft. That is clearly not what e.g. the GPL intended. Just look at the Stallman quote in the post.