OpenAI claims New York Times misused ChatGPT to fabricate lawsuit evidence

    OpenAI has asked a federal judge to dismiss parts of a copyright lawsuit filed by The New York Times, accusing the newspaper of employing deceptive tactics to generate misleading evidence, according to a recent Reuters report. The lawsuit, which centers on the alleged unauthorized use of the Times’ copyrighted material to train OpenAI’s artificial intelligence systems, including the popular ChatGPT, has sparked a heated debate over the boundaries of copyright law and AI technology.

    OpenAI’s defense, articulated in a recent filing in Manhattan federal court, argues that The New York Times contravened OpenAI’s terms of use by employing “deceptive prompts” to force the AI to reproduce the newspaper’s content. OpenAI contends that this strategy was designed to manufacture evidence for The New York Times’ lawsuit, undermining the integrity of the legal process. The filing criticizes the Times for failing to adhere to its own high journalistic standards, suggesting that the newspaper hired an external party to deliberately manipulate OpenAI’s products.

    At the heart of this legal battle is the contested question of whether training AI on copyrighted materials constitutes fair use — a principle that allows limited use of copyrighted material without permission for purposes such as news reporting, teaching, and research. Tech companies, including OpenAI, argue that their AI systems’ use of copyrighted content is fair use, essential for the development of AI technologies that could shape a multitrillion-dollar industry. Copyright owners, including The New York Times, counter that such practices infringe their copyrights and allow tech companies to benefit unduly from the publishers’ extensive investments in original content.

    Judicial precedents and the future of AI

    The case against OpenAI and its primary financial backer, Microsoft, is part of a broader wave of copyright lawsuits targeting tech companies over AI training practices. Courts, however, have yet to deliver a clear verdict on the fair use question in the context of AI, and some infringement claims have already been dismissed for lack of evidence that AI-generated content closely resembles the copyrighted works at issue.

    OpenAI’s filing emphasizes how difficult it is to use ChatGPT to systematically reproduce copyrighted articles, arguing that the instances cited by the Times were anomalies produced through extensive manipulation. The company also argues that it is inevitable that AI models will acquire knowledge from a wide range of sources, including copyrighted materials, and that this cannot be legally prevented — drawing a parallel with the traditional journalistic practice of re-reporting news.

    As the lawsuit progresses, the outcome could have profound implications for the future of AI development and the application of copyright law in the digital age. A ruling in favor of OpenAI could solidify the legal standing of AI’s fair use of copyrighted materials, potentially accelerating the growth of AI technologies. Conversely, a decision favoring The New York Times could impose new limitations on how AI can be trained, impacting the evolution of AI capabilities and the tech industry’s trajectory.

    The post OpenAI claims New York Times misused ChatGPT to fabricate lawsuit evidence appeared first on ReadWrite.


    Maxwell Nelson

    Tech Journalist

    Maxwell Nelson, a seasoned journalist and content strategist, has contributed to industry-leading platforms, weaving complex narratives into insightful articles that resonate with a broad readership.