...
In July 2023, OpenAI, the company behind ChatGPT, disabled public access to its AI classifier tool (Kirchner et al. 2023). Their announcement of the change cited the classifier’s “low rate of accuracy,” even with text generated by their own ChatGPT service. While it was available, the classifier incorrectly identified human-written text as AI-generated (a “false positive”) in 9% of cases in OpenAI’s own analysis. Independent testing found even higher rates (Elkhatat, Elsaid, and Almeer 2023).
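The practical stakes of that false positive rate become clearer with a quick back-of-the-envelope calculation. The sketch below is purely illustrative: it combines the 9% false positive rate with the 26% true positive rate OpenAI reported in the same announcement, and it assumes a hypothetical class of 100 submissions of which 20 are actually AI-generated. The class size and prevalence are assumptions, not data.

```python
# Illustrative only: estimates how many flagged submissions would be false
# accusations, using the rates OpenAI reported for its retired classifier.
# The class size and share of AI-written work below are assumptions.

TRUE_POSITIVE_RATE = 0.26   # AI-written text correctly flagged (OpenAI's figure)
FALSE_POSITIVE_RATE = 0.09  # human-written text wrongly flagged (OpenAI's figure)

submissions = 100                        # hypothetical class size
ai_written = 20                          # assumed AI-generated submissions
human_written = submissions - ai_written

true_flags = TRUE_POSITIVE_RATE * ai_written        # 5.2 expected correct flags
false_flags = FALSE_POSITIVE_RATE * human_written   # 7.2 expected false accusations

share_false = false_flags / (true_flags + false_flags)
print(f"Expected flags: {true_flags + false_flags:.1f}")
print(f"Share of flags that are false accusations: {share_false:.0%}")  # ~58%
```

Under these assumptions, a majority of the flagged students would be innocent, which is why even a single-digit false positive rate is difficult to defend in an academic integrity process.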
Recent research from the University of Maryland (Saha and Feizi 2025) finds even more concerning evidence about false positives in AI detection. Systematic testing of twelve leading AI detection tools revealed that most detectors failed to distinguish between different degrees of AI involvement, flagging minor edits at nearly the same rate as major rewrites. Some tools even showed the counterintuitive result of flagging lightly edited text more frequently than heavily AI-generated content.
In the context of academic integrity, the risks of false positives are significant (Klee 2023; Fowler 2023). Unreliable AI detection not only fails to improve academic integrity but may deepen existing inequalities. Non-native English speakers are flagged by AI detection tools at a disproportionate rate (Myers 2023). Tools with legitimate academic applications, such as Grammarly, which are especially valuable to writers with dyslexia and other learning disabilities, also increase the likelihood that a writer’s work will be flagged by AI detectors (Shapiro 2018; Steere 2023).
Further, recent research indicates that AI detectors exhibit bias against output from older or smaller LLMs, creating further inequity: students with differing levels of access to AI tools face vastly different risks of false accusation (Saha and Feizi 2025).
For all of the reasons given above, university ITS does not currently license, support, or recommend any tool for AI writing detection. Barring a significant technological breakthrough on this front, these tools are simply not reliable enough to be incorporated into our university policies and procedures.
...
Units across campus will continue to provide forums for faculty to discuss the implications of AI and approaches to take in response. With no reliable detection tools on the horizon, these conversations, both on campus and off, represent our best avenue to the authentic assessment of our students and their work (McMurtrie 2023; “Authentic Assessment,” n.d.). Faculty should refer to the resources provided by the Center for Learning and Student Success for more information about the role of AI detection tools when submitting academic integrity cases.
Online Learning Services will continue to evaluate new teaching and learning technologies and remains available to consult with faculty on teaching and technology. ITS will continue to provide access to effective tools where they are available. Beyond these technological considerations, the Center for Teaching and Learning Excellence offers pedagogical and policy resources for instructors on strategies to improve their assessments (CTLE, n.d.).
AI Content Detection and Turnitin
...
Turnitin announced that when the free preview ended on December 31, 2023, it would begin charging an additional license fee for use of the AI detection tool. Given the concerns about its effectiveness, ITS elected not to license the AI content detection tool. We are not alone in this choice; multiple R1 universities have made similar decisions (Brown 2023; Coley 2023; “Known Issue – Turnitin AI Writing Detection Unavailable – Center for Instructional Technology | The University of Alabama” 2023). As UC Berkeley’s Center for Teaching and Learning explained to The New York Times, “overreliance on technology can damage a student-and-instructor relationship more than it can help it.” Even institutions that continue using these tools acknowledge their limitations: the Times noted that the University of Houston-Downtown warns faculty that plagiarism detectors “are inconsistent and can easily be misused” (Holtermann 2025).
We are also unable to recommend any alternative technological solution. None of the AI detection tools currently available online are accurate enough to provide credible evidence in academic integrity investigations. The risk of misleading results harming students who are acting in good faith is too great. ITS is committed to thorough and transparent vetting of any new tools that emerge in the future. If a reliable tool for AI detection becomes available, ITS will evaluate the tool and consider recommending it to the Syracuse University academic community.
...
Kirchner, Jan Hendrik, Lama Ahmad, Scott Aaronson, and Jan Leike. 2023. “New AI Classifier for Indicating AI-Written Text.” January 31, 2023. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text.
Holtermann, Callie. 2025. “How Students Are Fending Off Accusations That They Used A.I. to Cheat.” The New York Times, May 17, 2025. https://www.nytimes.com/2025/05/17/style/ai-chatgpt-turnitin-students-cheating.html.
Klee, Miles. 2023. “She Was Falsely Accused of Cheating With AI -- And She Won’t Be the Last.” Rolling Stone (blog). June 6, 2023. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/.
...
Myers, Andrew. 2023. “AI-Detectors Biased Against Non-Native English Writers.” May 15, 2023. https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers.
Saha, Shoumik, and Soheil Feizi. 2025. “Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing.” arXiv preprint arXiv:2502.15666v2. https://doi.org/10.48550/arXiv.2502.15666.
Shapiro, Lisa Wood. 2018. “How Technology Helped Me Cheat Dyslexia.” Wired, June 18, 2018. https://www.wired.com/story/end-of-dyslexia/.
...