As artificial intelligence has evolved and entered mainstream use — with AI filters invading every corner of social media, and AI writing tools churning out everything from student essays to Sports Illustrated articles — there have been multiple instances of public outcry about the alleged plagiarism these tools encourage.
Consider, for instance, the protests that erupted when Christie’s Auction House announced an AI art auction in February of 2025; more than 6,000 artists signed an open letter calling for the cancellation of the auction, citing concerns that it incentivized the “mass theft of human artists’ work.”
“These [AI] models, and the companies behind them, exploit human artists, using their work without permission or payment to build commercial AI products that compete with them,” the letter read. And yet, even as these protests pick up steam and AI-related copyright infringement lawsuits continue to spring up around the world, the laws surrounding AI copyright infringement remain undefined and evolving, much like the technology itself.
Where will these legal battles go in the future? If United States courts rule that the use of artificial intelligence tools constitutes copyright infringement, how will that affect students who use these AI tools in high school or higher education, spaces that rigidly enforce anti-plagiarism rules? Are we on the brink of major AI restrictions across the country?
While plagiarism itself is an ethical concept rather than a legally enforceable offense, copyright infringement is. Copyright law in the United States is a complex system designed to protect the expression of original works by an artist or creator, and it is within this system that artificial intelligence has faced its greatest hurdles so far.
The biggest development in the copyright battle against AI actually began two years before the generative AI boom, when the media and information conglomerate Thomson Reuters sued a legal AI startup called Ross Intelligence back in 2020. Thomson Reuters went on to win the lawsuit in 2025, when U.S. Circuit Judge Stephanos Bibas ruled that Ross Intelligence was not permitted to copy content from Thomson Reuters’ Westlaw platform, marking the first major blow to the concept of “fair use” in AI.
“Fair use” is the complex foundational doctrine that companies like OpenAI and Meta Platforms invoke to justify their services, claiming that their AI systems merely study copyrighted materials in order to create new content; opponents counter that these companies are stealing creators’ work to compete directly against them.
As mentioned, the notion of “fair use” in AI is still open to legal interpretation, though we may see a definitive ruling on this hotly contested topic in one of the country’s ongoing AI copyright lawsuits, such as Advance Local Media v. Cohere. In this case, a group of news publishers including Condé Nast, The Atlantic, and Vox alleged copyright infringement against Cohere Inc., claiming that the company used their copyrighted publications to build and operate its AI. Because the case involves multiple allegations of copyright infringement and Lanham Act violations, the outcome of Advance Local Media v. Cohere may produce the first ruling that definitively restricts “fair use” in AI.
These cases demonstrate that AI plagiarism is not yet illegal in itself, but as more and more cases are decided, we may see an increased crackdown on AI usage in art, in professional writing, and in schoolwork. In the future, using AI for schoolwork may expose you to copyright infringement as well as plagiarism, so it’s important to understand the safe, legal measures we can take to use this technology correctly.
So, what differentiates plagiarism from using AI in a responsible way? Originality.ai explains that there are still many ways to use AI tools “responsibly” during the content creation process, simply by taking care to recognize the copyright implications of your work. One method is to cite your sources extensively when writing essays or completing assignments, since AI often doesn’t cite its sources during content generation. If AI does not cite the sources of information you want to include, it may even be best to leave those facts out entirely.
Beyond that, AI should always be used as an assistant to your writing, rather than the author of what should be your own work. It should never entirely replace your writing; instead, let it offer suggestions and additions to your content, or help you proofread what you have written. As our society moves toward greater AI integration in all walks of life (and with legal crackdowns on AI looming in the future), it’s essential that we use these tools purely to enhance our work, not to replace it.
AI plagiarism checkers can also be helpful: they let you confirm that your work does not include plagiarized content and that you are using AI in an ethical way. By following these steps when using AI in your work, you can be sure you are not plagiarizing others, establishing a baseline for ethical AI use ahead of any upcoming legal rulings.
Looking to the future, it’s very possible that these AI tools will find themselves at the center of major copyright infringement lawsuits and restrictions, and as students we need to prepare ourselves for that eventuality. By studying the greater legal implications of AI, we can not only protect ourselves from plagiarism but also elevate our own work by refusing to take advantage of an ethically ambiguous shortcut.