...
Where We Stand — Fall 2024
ITS and Online Learning Services are acutely aware that artificial intelligence tools like ChatGPT have been disruptive to the assessment methods used by many instructors. We have long been aware of the risk of ghostwriters and creative solutions for cheating on exams, but the ease with which these new AI tools can generate believable content has led to a sharp increase in questions about how to determine that the work submitted by students is original (Mills 2023). The detection of AI content is notoriously difficult (Edwards 2023; Heikkilä 2022).
...
In July 2023, OpenAI, the company behind ChatGPT, disabled public access to its AI classifier tool (Hendrik Kirchner et al. 2023). The announcement cited the classifier’s “low rate of accuracy,” even on text generated by OpenAI’s own ChatGPT service. While it was available, the classifier incorrectly identified human-written text as AI-generated – a “false positive” – in 9% of cases in OpenAI’s own analysis. Independent testing found even higher rates (Elkhatat, Elsaid, and Almeer 2023).
Recent research from the University of Maryland finds even more concerning evidence about false positives in AI detection (Saha and Feizi 2025). Systematic testing of twelve leading AI detection tools revealed that most detectors failed to distinguish between different degrees of AI involvement, flagging minor edits at nearly the same rate as major rewrites. Some tools even showed the counterintuitive result of flagging lightly edited text more often than heavily AI-generated content.
In the context of academic integrity, the risks of false positives are significant (Klee 2023; Fowler 2023). Unreliable AI detection not only fails to improve academic integrity but may deepen existing inequalities. Non-native English speakers are flagged by AI detection tools at a disproportionate rate (Myers 2023). Tools with legitimate academic uses, such as Grammarly, which are particularly valuable for writers with dyslexia and other learning disabilities, also increase the likelihood that a student’s work will be flagged by AI detectors (Shapiro 2018; Steere 2023).
Recent research also indicates that AI detectors are biased against text produced with older or smaller language models, creating further inequity: students with differing levels of access to AI tools face vastly different risks of false accusation (Saha and Feizi 2025).
For all of the reasons given above, ITS does not currently license, support, or recommend any tool for AI writing detection. Barring a significant technological breakthrough on this front, these tools are simply not reliable enough to be incorporated into our university policies and procedures.
What to Do?
All of this leaves instructors in a challenging position: the best recommendation currently on offer is to redesign their assessments. Redesigning assessments is difficult and time-consuming, and the new assessment methods often require more time to grade. Just as AI tools are beginning to make the process of writing faster and easier for everybody, it feels unfair that teachers of writing are forced to spend more of their own precious time addressing the downsides and potential misuse of these tools.
This change in the digital writing landscape has been foisted upon us suddenly and leaves us all scrambling to respond. Even so, these tools are available to learners and there is no practical way to prevent students from using them — the chat is already out of the bag, so to speak. Any response will consume our time and energy, so it is important that our efforts are spent in ways that genuinely address the problem. The instinct to just do something, responding to new problems by seeking out even newer tools, is understandable but flawed. Time spent chasing false positives created by inadequate and biased tools is time wasted, and it puts our relationship with our students at risk. Our time is better spent adapting our teaching and assessments to reflect the changing landscape of writing technology.
...
Units across campus will continue to provide forums for faculty to discuss the implications of AI and approaches to take in response. With no reliable detection tools on the horizon, these conversations, both on campus and off, represent our best avenue to authentic assessment of our students and their work (McMurtrie 2023; “Authentic Assessment,” n.d.). Faculty should refer to the resources provided by the Center for Learning and Student Success for more information about the role of AI detection tools in submitting academic integrity cases.
Online Learning Services will continue to evaluate new teaching and learning technologies and remains available to consult with faculty on teaching and technology. ITS will continue to provide access to effective tools where they are available. Beyond these technological considerations, the Center for Teaching and Learning Excellence offers pedagogical and policy resources for instructors on strategies they might take to improve their assessments (CTLE, n.d.).
AI Content Detection and Turnitin
In April 2023, Turnitin released an AI writing detector, which was enabled in the Syracuse University Turnitin system as a free preview. Turnitin initially reported low rates of false positives, but those figures have since been called into question (Chechitelli 2023; D’Agostino 2023). The detector’s false negative rate was close to 40–50% in tests where AI-generated text was reworded by a human or by a separate AI paraphrasing tool (Weber-Wulff et al. 2023).
At the end of the free preview on December 31, 2023, Turnitin announced that it would begin charging an additional license fee for the AI detection tool. Given the concerns about its effectiveness, ITS elected not to license it. We are not alone in this choice; multiple R1 universities have made similar decisions (Brown 2023; Coley 2023; “Known Issue – Turnitin AI Writing Detection Unavailable – Center for Instructional Technology | The University of Alabama” 2023). As UC Berkeley’s Center for Teaching and Learning explained to The New York Times, “overreliance on technology can damage a student-and-instructor relationship more than it can help it.” Even institutions that continue using these tools acknowledge their limitations; the Times noted that the University of Houston-Downtown warns faculty that plagiarism detectors “are inconsistent and can easily be misused” (Holtermann 2025).
We are also unable to recommend any alternative technological solution. None of the AI detection tools currently available online are accurate enough to provide credible evidence in academic integrity investigations. The risk of misleading results harming students who are acting in good faith is too great. ITS is committed to thorough and transparent vetting of any new tools that emerge in the future. If a reliable tool for AI detection becomes available, ITS will evaluate the tool and consider recommending it to the Syracuse University academic community.
...
Other AI Policy and Planning Resources from Syracuse University
Center for Teaching and Learning Excellence (CTLE)
Center for Learning And Student Success (CLASS)
Syracuse University Libraries Artificial Intelligence Research Guide
...
Bibliography
“Authentic Assessment.” n.d. Center for Innovative Teaching and Learning. Accessed February 27, 2024. https://citl.indiana.edu/teaching-resources/assessing-student-learning/authentic-assessment/index.html.
...
Hendrik Kirchner, Jan, Lama Ahmad, Scott Aaronson, and Jan Leike. 2023. “New AI Classifier for Indicating AI-Written Text.” January 31, 2023. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text.
Holtermann, Callie. 2025. “How Students Are Fending Off Accusations That They Used A.I. to Cheat.” The New York Times, May 17, 2025. https://www.nytimes.com/2025/05/17/style/ai-chatgpt-turnitin-students-cheating.html.
Klee, Miles. 2023. “She Was Falsely Accused of Cheating With AI -- And She Won’t Be the Last.” Rolling Stone (blog). June 6, 2023. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/.
...
Myers, Andrew. 2023. “AI-Detectors Biased Against Non-Native English Writers.” May 15, 2023. https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers.
Saha, Shoumik, and Soheil Feizi. 2025. “Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing.” arXiv preprint arXiv:2502.15666v2. https://doi.org/10.48550/arXiv.2502.15666.
Shapiro, Lisa Wood. 2018. “How Technology Helped Me Cheat Dyslexia.” Wired, June 18, 2018. https://www.wired.com/story/end-of-dyslexia/.
...