
German court rules in favour of music rights management organisation against OpenAI
On 11 November 2025, the 42nd Civil Chamber of the Munich Regional Court I (Landgericht München I), which specialises in copyright matters, ruled in favour of the collective management organisation GEMA against two companies of the OpenAI group over the use of song lyrics in ChatGPT.
The ruling follows a hearing held on 29 September 2025, during which both sides presented their arguments to the court. GEMA represented the authors and publishers of nine well-known German songs, including 'Atemlos' by Kristina Bach and 'Wie schön, dass du geboren bist' by Rolf Zuckowski. The defendants were OpenAI, L.L.C. and OpenAI Ireland Ltd.
According to the collective management organisation, OpenAI's models had learned and retained fragments of these lyrics so precisely that they could reproduce recognisable passages in response to simple user prompts. In GEMA's view, this was not merely a by-product of the statistical patterns learned during training, but the retention of specific elements of protected songs. GEMA therefore argued that if ChatGPT reproduces identifiable fragments of existing works, even in generative conversations, it is using content that requires authorisation.
OpenAI offered a very different perspective. The company claimed that its models do not store lyrics or maintain internal databases; rather, they detect statistical patterns in large bodies of text and generate probabilistic outputs. It further stated that the results depend heavily on user prompts and that the system does not 'copy' works, but rather attempts to generate new text. OpenAI also emphasised that training AI models necessarily involves processing large quantities of information, a permitted practice, and that the coincidences highlighted by GEMA did not demonstrate literal reproduction of protected works.
The key legal question for the court was whether incorporating song fragments into an AI model could be considered reproduction under Article 2 of Directive 2001/29/EC (the InfoSoc Directive) and § 16 of the German Copyright Act (UrhG). On 11 November, the Chamber concluded that it could. It explained that the law protects reproduction 'by any means and in any form', including fixations that are not directly perceptible, such as those stored in the statistical parameters of a generative model. The court therefore held that the 'memorisation' of lyrics by the GPT-4 and GPT-4o models constituted a protected act of reproduction.
In addition, the judges noted that when ChatGPT produced passages that clearly incorporated original elements of the songs, this constituted not only a new reproduction, but also an act of communication to the public, as defined in Article 3 of the InfoSoc Directive and § 19a UrhG. The OpenAI companies could thus be held liable, as the fragments appeared even in response to basic prompts and resulted from the system's training and design rather than any creative input from the user.
Another important issue was whether these uses were covered by the text and data mining exception set out in Article 4 of Directive (EU) 2019/790 (the DSM Directive) and its transposition into German law in § 44b UrhG. The German court held that they were not. The court considered that the exception permits temporary or preparatory copies necessary for analysing large quantities of information, such as RAM loads or format conversions, but does not allow the long-term incorporation of works into the model. According to the Chamber, this permanent fixation affects the normal exploitation of the songs and clearly falls outside the scope of the exception. Similarly, the court rejected the idea that the fragments could be considered incidental inclusion under § 57 UrhG, and denied any implied consent by rights holders for this kind of use.
Lastly, the court ordered OpenAI to cease using the protected content, to provide GEMA with details of how the works were used and stored within the system, and to compensate the rights holders for the damage caused. The decision may still be appealed, however, so it remains to be seen whether higher courts will uphold this approach, and whether courts in other EU Member States will adopt similar reasoning in future litigation.
NYT vs OpenAI: dispute over 20 million anonymised chats
The copyright dispute between The New York Times and OpenAI, which was brought before the U.S. District Court for the Southern District of New York in December 2023, has recently taken a significant turn. This year, the case was consolidated into the multidistrict litigation The New York Times Company v. Microsoft Corporation (1:23-cv-11195), overseen by Judge Sidney H. Stein and Magistrate Judge Ona T. Wang. It involves several media companies suing the tech firms OpenAI and Microsoft for allegedly reproducing protected articles within generative models.
On 12 November 2025, OpenAI filed an urgent motion asking Judge Stein to revoke an order issued by Magistrate Judge Wang requiring the company to produce approximately twenty million anonymised ChatGPT conversation records. The order stipulates that the records must be provided under strict confidentiality measures, including de-identification, with access limited to the parties' legal teams. According to Magistrate Judge Wang, these safeguards are sufficient to protect user privacy, which is why she approved the measure.
However, OpenAI argues that producing the data would be excessive and risky, even with these protections. The company claims that such a large volume of anonymised data could enable user re-identification or expose sensitive information. OpenAI also contends that the order is disproportionate and unnecessary, as '99.99% of the conversations are irrelevant' to the allegations concerning the reproduction of Times content. In a public statement, the company reinforced its position, saying that the court is authorising a 'fishing expedition' that could affect millions of individuals not involved in the litigation.
In contrast, for the Times and the other media plaintiffs, access to the anonymised records is essential to determine the circumstances in which OpenAI's models might generate passages that substantially match their articles, and the prompts under which this occurs. They argue that only a large dataset can reveal patterns, design flaws or cases of 'memorisation', meaning the internal retention of specific passages from protected works.
This dispute over the scope of discovery arose at an important stage in the proceedings. In April 2025, Judge Stein allowed the core elements of the Times' complaint to proceed, rejecting several motions to dismiss filed by OpenAI and Microsoft. The following month, Magistrate Judge Wang issued a preservation order over concerns that certain records might be deleted, leading the court to hold a dedicated conference on the matter. Judge Stein must now decide whether to uphold Magistrate Judge Wang's order or to limit it, balancing the need for evidence against the protection of third-party data.
Details
- Publication date: 14 November 2025
- Author: European Innovation Council and SMEs Executive Agency