To comply with copyright law, not to skirt it. That’s what companies that scan large numbers of books do. See for example Authors Guild v. Google from back when Google was scanning books to add to their book search engine. Framing this like it’s some kind of nefarious act is misleading.
They also weren’t destroying rare books. They were buying in-print books from major retailers, which means that while, yes, it’s environmentally wasteful, it’s not destroying books in the classical destruction-of-knowledge sense, since the manufacturer will just print more if there’s demand.
This as well. Growing up in a house of book lovers, myself included, destroying a book was akin to kicking a puppy. Realistically though, they’re ultimately consumables. They’re meant to be bought, used, and replaced as needed. With luck the destruction included recycling as much as possible, seeing as it’s mainly paper.
Precisely. There’s a reason that these days, books made for libraries are built to an entirely different standard than books sold at your local book store.
Yeah, there are millions of old books that nobody wants, not even collectors. It’s not just popular literature.
Yeah, this is on its way to being a win. In this case they actually bought the books, which has been one of the biggest issues with LLMs. There’s certainly more discussion to be had about how they use the materials in the end, but this is a step in the right direction.
To a certain extent I agree, but you can buy a book and still commit copyright infringement by copying its contents for anything beyond personal use.
If this were to go to court, it would depend on whether training an LLM is more akin to copying or to learning. I can see arguments for either interpretation, but I suspect the law would lean toward it being copying rather than learning.
There’s already been a summary judgment in this case ruling that the AI training activity was not, by itself, a copyright violation.
This isn’t an automatic complete win for them.
Being allowed to train under fair use rules doesn’t mean you’re protected if your LLM still regurgitates content.
https://arstechnica.com/tech-policy/2025/07/nyt-to-start-searching-deleted-chatgpt-logs-after-beating-openai-in-court/
The lawsuit between the NYT and OpenAI is still ongoing; that article is about a court order to “preserve evidence” that could be used in the trial. It doesn’t indicate anything about how the case might ultimately be decided.
Last I dug into the NYT v. OpenAI case, it looked pretty weak: the NYT had heavily massaged their prompts to get ChatGPT to regurgitate snippets of their old articles, and the judge had called them out on that.
I see. In that case I stand corrected.