OpenAI shuts off its AI detector due to its poor accuracy

By: Micheal Wilson

July 25, 2023 6:48 AM

OpenAI has quietly discontinued its AI Classifier, which was designed to assist teachers, professors, and others in distinguishing between human- and AI-written content.


OpenAI, the artificial intelligence powerhouse, has quietly pulled the plug on its AI-detection software, citing a low rate of accuracy.


The OpenAI-developed AI classifier was originally launched on January 31 to help users such as teachers and professors distinguish between human-written and AI-generated text.


However, according to the original blog post that announced the tool, the AI classifier has been discontinued as of July 20:


"As of July 20, 2023, the AI classifier is no more available due to its low rate of accuracy."


The link to the tool is no longer active, and the note offered little explanation for why it was shut down. However, the company said it was researching new, more effective ways of detecting AI-generated content.


"We are focused on developing and deploying systems that enable consumers to determine if audio or visual material is AI-generated. We are trying to include feedback and are actively studying more efficient provenance techniques for text."

OpenAI's former AI classifier in action. Source: OriginalityAI


From the outset, OpenAI acknowledged that the detection tool was prone to errors and should not be considered "fully reliable."


According to the company, the AI detection tool's limitations included being "very unreliable" on text of fewer than 1,000 characters and sometimes "confidently" labeling human-written text as AI-generated.


The classifier is the latest of OpenAI's products to come under scrutiny.


On July 18, researchers from Stanford and UC Berkeley published a study suggesting that OpenAI's flagship product, ChatGPT, was getting worse over time.


The researchers found that GPT-4's accuracy in identifying prime numbers had dropped from 97.6% to 2.4% in just a few months. In addition, both GPT-3.5 and GPT-4 saw a significant drop in their ability to generate lines of code.
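To make that accuracy figure concrete, here is a minimal sketch of how such a benchmark can be run: sample some integers, ask the model whether each is prime, and compare its answers against a deterministic check. The prompt wording, model name, sample range, and answer parsing below are illustrative assumptions, not the researchers' actual protocol.

```python
# Minimal sketch: measure a chat model's accuracy at identifying prime numbers.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set.
import random
from openai import OpenAI

client = OpenAI()


def is_prime(n: int) -> bool:
    """Ground-truth primality check by trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def model_says_prime(n: int, model: str = "gpt-4") -> bool:
    """Ask the model whether n is prime and parse a Yes/No reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Is {n} a prime number? Answer only Yes or No.",
        }],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer.startswith("yes")


# Score the model on a random sample of integers.
numbers = random.sample(range(1_000, 20_000), 50)
correct = sum(model_says_prime(n) == is_prime(n) for n in numbers)
print(f"Accuracy: {correct / len(numbers):.1%}")
```

Repeating the same script against the same model at different points in time is one simple way to observe the kind of drift the study describes.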