Overview
AI detectors can be useful tools in a university's toolbox when evaluating whether a student may have used generative AI (GenAI) in a paper, examination, or other assignment in violation of applicable university policies. But institutions of higher education (IHEs) should not rely on such detectors as dispositive and should allow students an opportunity during the hearing process to rebut any such detector's findings, including by attacking its reliability. Recent court cases illustrate the risks that IHEs face when they place too much emphasis on AI detectors or do not permit students to challenge detectors' reliability.
Background
Plagiarism checkers that have long helped educators detect cheating cannot identify the use of generative artificial intelligence with the same accuracy. Traditional plagiarism detection tools readily identify reused, unattributed language by checking a student's paper against a bank of published material and prior student papers. But text newly generated by AI does not appear in that bank, so those tools cannot determine whether GenAI was used. Instead, AI detection tools flag patterns, such as uniform sentence structure and particular syntax markers, and then typically offer a percentage score suggesting the likelihood that a paper (or specific passages within it) was prepared using generative AI (for example, an 85% likelihood). Unlike the findings of a plagiarism detection tool, which can establish that someone plagiarized and identify the source, the findings of an AI detection tool are probabilistic and cannot definitively establish whether AI was used.
The company that operates one of the leading AI platforms launched an AI detection tool only to withdraw it quickly from the market after concluding that it was not accurate. That decision should give IHEs pause before relying too heavily on the findings of other AI detection tools. Studies have also warned that AI-generated text can closely resemble genuine scientific writing and that AI detection software may wrongly flag text drafted by non-native speakers of English, raising questions about AI detectors' reliability and the potential for prejudicial outcomes.[1] Other studies have raised similar questions about the reliability of AI detectors.[2]
Given the prevalence of generative AI, assessment practices in higher education are shifting back toward formats that limit students' access to technology. Sales of 'blue books' have increased significantly;[3] some professors are reportedly moving examinations and substantive writing assignments into the classroom to limit reliance on generative tools;[4] and some are opting to conduct oral examinations.[5]
Recent Litigation
Some students who have been suspended or expelled by their IHEs based on allegations that they used AI on examinations or papers have turned to the courts. In one recent case, a student prevailed where the IHE had used AI detection tools to help determine guilt.
In Newby v. Adelphi University, an undergraduate student sued his university after it accused him of using generative AI on an essay. The student's professor used Turnitin, an AI detection service that produced an "AI-generated score of 100%," suggesting that the essay had "been produced by artificial intelligence."[6] Following a misconduct hearing, the student received an official violation under the University's Code of Academic Integrity. He unsuccessfully appealed the violation through Adelphi's internal appeals process. He then filed a lawsuit under Article 78 of New York's Civil Practice Law and Rules (CPLR) in the Supreme Court of New York, alleging that Adelphi's decisions were arbitrary and capricious and that he had been denied a "fair and impartial opportunity to be heard."[7] On January 28, 2026, the court denied Adelphi's motion to dismiss. The court concluded that Adelphi's finding that the student had plagiarized was without merit because Adelphi had failed to consider the student's evidence, thereby "thwarting" a "meaningful appeal."[8] The court emphasized that even though the basis of the student's violation was the university's Code of Academic Integrity, which governs allegations of plagiarism and use of AI, the student Code of Conduct requires that students be afforded due process rights in the face of alleged misconduct.[9]
By contrast, in Yang v. Neprash, the U.S. District Court for the District of Minnesota sided with the University of Minnesota when a Ph.D. student sued several university officials after he was expelled for academic dishonesty involving his alleged use of artificial intelligence tools during a doctoral exam.[10] Unlike in Newby, where the professor used an AI-detection tool, in Yang the professor concluded that AI had been used because the student's answers were inconsistent with his earlier writing style and contained ideas that had not been covered in class.[11] The court dismissed the complaint, finding that the student had received detailed notice of the allegations, was represented by an advocate, had a full opportunity to present evidence, and had an opportunity for appellate review, all of which satisfied due process.[12]
Practical Guidance and Takeaways
Cases like Newby and Yang underscore the need for IHEs to update their policies to address generative AI and ensure that they are not over-relying on AI-detection tools or denying students due process.
Best practices include:
- Revising institutional policies to ensure:
- Institutions do not place undue emphasis on the findings of AI-detection tools;
- Institutions protect due-process safeguards, including allowing students to challenge the reliability of any AI-detection tools that the institution uses during the misconduct process; and
- Institutions clarify permitted versus prohibited uses of generative AI.
- Encouraging instructors to be thoughtful about students' use of GenAI, which may include:
- Clearly stating expectations and permitted uses of generative AI in course syllabi;
- Departing from the typical take-home exam or paper, and instead offering:
- Supervised assessments in class, using blue books, oral exams, or supervised writing;
- Multiple stages of drafts; or
- Experiential or other assignments that require personal engagement and cannot be as easily answered with generative AI.
- Treating AI detection tools as screening mechanisms rather than proof of cheating. If a detector flags an assignment, treat the flag as a lead to follow up on with the student, not as a finding.
Steptoe has extensive experience assessing university policies and providing guidance on artificial intelligence practices, as well as helping clients navigate misconduct situations and follow-on litigation. We remain ready to assist educational institutions to strengthen current practices and navigate the intersection of AI and education.
[1] Weixin Liang, Mert Yuksekgonul, et al., GPT detectors are biased against non-native English writers, 4 Patterns 7 (2023), https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7 [https://doi.org/10.48550/arXiv.2304.02819] (concluding that detectors aimed at differentiating between human-generated and AI-generated content consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified).
[2] Michaela G. Murdock & Aditya Tadinada, Can AI Tools Reliably and Effectively Detect Plagiarism in Scientific Writing?, 17(5) Cureus J. Med. Sci. (2025), https://pmc.ncbi.nlm.nih.gov/articles/PMC12152223/ [https://doi.org/10.7759/cureus.83924] (concluding that AI-detection tools cannot reliably detect the use of AI in AI-rewritten text).
[3] Ben Cohen, They Were Every Student's Worst Nightmare. Now Blue Books Are Back, https://www.wsj.com/business/chatgpt-ai-cheating-college-blue-books-5e3014a6?gaa_at=eafs&gaa_n=AWEtsqfg96yHd6O5ltLB-5l2DAGueFGiFvzpwK8K5IU72NuZzUpQkPbCPkAG2qpqW_A%3D&gaa_ts=69068609&gaa_sig=62eQ3776ncDtUxDOLS45ClL7kAsmlGSq3O2wNgT-cCXh1uBq-giXIJRcixRy_Sl6tvuw4YrQ8oxYmHHtjwgo1g%3D%3D, May 23, 2025 (accessed Mar. 2, 2026).
[4] Johanna Alonso, The Handwriting Revolution, https://www.insidehighered.com/news/faculty-issues/curriculum/2025/06/17/amid-ai-plagiarism-more-professors-turn-handwritten-work, Jun. 17, 2025 (accessed Mar. 2, 2026).
[5] Joanna Slater, Professors are turning to this old-school method to stop AI use on exams, https://www.washingtonpost.com/education/2025/12/12/ai-artificial-intelligence-college-oral-exam/, Dec. 12, 2025 (accessed Mar. 2, 2026).
[6] Matter of Newby v. Adelphi Univ., No. 26021 (N.Y. Sup. Ct. Jan. 28, 2026).
[7] Id.
[8] Id.
[9] Id.
[10] Haishan Yang v. Neprash, 2025 LX 416753 (D. Minn. Oct. 31, 2025).
[11] Id.
[12] Id.