
OpenAI officially launched an AI-generated content identifier, but the success rate is only 26%. Netizens: It's not as good as a paper plagiarism checking tool.

PHPz
2023-04-11 15:19:03

Many people may have forgotten that ChatGPT was officially released only at the end of last November, barely two months ago. Yet the craze it set off has prompted technology companies to follow suit, spawned unicorn startups, and led the academic community to revise its requirements for paper acceptance.

After ChatGPT triggered a broad debate in the AI field over whether it should be banned, OpenAI's authenticity-detection tool has finally arrived.

On January 31, OpenAI officially announced a detection tool for distinguishing human-written works from AI-generated text, designed to identify content produced by its own models such as ChatGPT and GPT-3. However, the classifier's accuracy currently looks worrying: OpenAI noted in its blog post that the tool identifies AI-written text with high confidence only about 26% of the time. Still, the company believes that, used in conjunction with other methods, it can help prevent AI text generators from being misused.

"The purpose of our proposed classifier is to help reduce confusion caused by AI-generated text. However, it still has some limitations, so it should be used as an alternative to other methods of determining the source of text. as a supplement rather than as a primary decision-making tool," an OpenAI spokesperson told the media via email. "We are getting feedback on whether such tools are useful with this initial classifier, and hope to share ways to improve them in the future." Enthusiasm for text-generating AI in particular is growing, but it's been countered by concerns about misuse, with critics calling on the creators of these tools to take steps to mitigate their potentially harmful effects.

Faced with a flood of AI-generated content, some industries moved quickly to impose restrictions. Several of the largest school districts in the United States have banned ChatGPT on their networks and devices, fearing it will harm students' learning and doubting the accuracy of the content the tool generates. Websites including Stack Overflow have also banned users from posting content generated by ChatGPT, saying the AI would swamp normal discussions with useless content.

These situations highlight the need for AI-detection tools. Although its performance is unsatisfactory, the OpenAI AI Text Classifier is architecturally aligned with the GPT series: like ChatGPT, it is a language model trained on many public text examples from the web. Unlike ChatGPT, it is fine-tuned to predict the likelihood that a piece of text was generated by AI, not just by ChatGPT but by any text-generating model.

Specifically, OpenAI trained its AI Text Classifier on text from 34 text-generation systems built by five different organizations, including OpenAI itself. These samples were paired with similar (but not identical) human-written text from Wikipedia, from websites pulled from links shared on Reddit, and from a set of "human demonstrations" collected for an OpenAI text-generation system.

It should be noted that the OpenAI Text Classifier is not suitable for all types of text. The text to be checked must be at least 1,000 characters long, or roughly 150 to 250 words. It also lacks the plagiarism-detection capabilities of paper-checking platforms, an uncomfortable limitation given that text-generating AI has been shown to copy "correct answers" from its training set. And because its dataset is heavily weighted toward English, OpenAI says the classifier is more likely to make mistakes on text written by children or in languages other than English.

When evaluating whether a given piece of text was generated by AI, the detector does not give a definitive yes-or-no answer. Instead, depending on its confidence, it labels the text as "very unlikely" to be AI-generated (less than a 10% probability), "unlikely" to be AI-generated (10% to 45%), "unclear if it is" AI-generated (45% to 90%), "possibly" AI-generated (90% to 98%), or "likely" AI-generated (more than 98%).
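For illustration, here is a minimal Python sketch of how those published thresholds map a confidence score to a label, folding in the 1,000-character minimum mentioned above. The function and its name are hypothetical, not OpenAI's code.

```python
# A minimal sketch (not OpenAI's code) of how the article's probability
# bands map to the classifier's five labels. Thresholds and label names
# come from the text above; the function itself is hypothetical.

def label_from_probability(ai_probability: float, text: str) -> str:
    """Map a classifier confidence score to one of the five published labels."""
    if len(text) < 1000:
        # The classifier requires at least 1,000 characters (~150-250 words).
        return "text too short to evaluate"
    if ai_probability < 0.10:
        return "very unlikely AI-generated"
    elif ai_probability < 0.45:
        return "unlikely AI-generated"
    elif ai_probability < 0.90:
        return "unclear if it is AI-generated"
    elif ai_probability < 0.98:
        return "possibly AI-generated"
    else:
        return "likely AI-generated"

print(label_from_probability(0.97, "..." * 400))  # -> "possibly AI-generated"
```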

The scheme looks much like an image-recognition AI reporting confidence scores, except for the accuracy: according to OpenAI, the classifier incorrectly labels human-written text as AI-written 9% of the time.

After some trials, the results are indeed not good

OpenAI itself puts the classifier's success rate at only about 26%, and netizens who tried it found that its detection performance was indeed poor.

After trying it, the well-known ML and AI researcher Sebastian Raschka gave the verdict "It does not work." He used text from the original 2015 edition of his book Python Machine Learning as input, with the results shown below.

  • Randy Olson's foreword was labeled "unclear if it is" AI-generated
  • Raschka's own preface was labeled "possibly" AI-generated
  • A paragraph from the first chapter was labeled "likely" AI-generated


Sebastian Raschka said this is an interesting example, but he already feels sorry for students who may one day be penalized on the basis of such outrageous detection results.

So he proposed that anyone who wants to deploy such a model should share a confusion matrix; otherwise, if educators adopt the model for grading, it could cause real-world harm. There should also be transparency around its false-positive and false-negative rates.
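As a rough sketch of the kind of transparency Raschka is asking for, the snippet below computes a confusion matrix and the false-positive/true-positive rates with scikit-learn. The labels are made-up illustration data, not real classifier output.

```python
# Illustration only: made-up detector verdicts, not real classifier output.
from sklearn.metrics import confusion_matrix

# 1 = "AI-generated", 0 = "human-written"
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]   # ground truth
y_pred = [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]   # hypothetical detector verdicts

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# False-positive rate: human text wrongly flagged as AI (the 9% case above).
fpr = fp / (fp + tn)
# True-positive rate: AI text correctly flagged (the ~26% case above).
tpr = tp / (tp + fn)

print(f"FPR = {fpr:.0%}, TPR = {tpr:.0%}")
```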

In addition, Sebastian Raschka fed in the first page of Shakespeare's "Macbeth", and the OpenAI AI Text Classifier concluded that it was likely to have been generated by AI. Simply outrageous!


Others uploaded content created with the AI writing tool Easy-Peasy.AI, and the OpenAI AI Text Classifier judged it very unlikely to have been generated by AI.


Finally, someone used round-trip translation and had GPT-3 rewrite the text, which also fooled the detector.
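For context, a rough sketch of the GPT-3 rewrite half of that trick is shown below, assuming the legacy openai Python SDK (v0.x) that was current when the classifier launched; the model choice and prompt wording are illustrative assumptions, not the netizen's actual setup.

```python
# A hedged sketch of the "have GPT-3 rewrite it" trick described above.
# Assumes the legacy openai Python SDK (v0.x); model and prompt are
# illustrative choices, not the actual setup used by the netizen.
import openai

openai.api_key = "sk-..."  # placeholder key

def rewrite(text: str) -> str:
    """Ask GPT-3 to paraphrase a passage while keeping its meaning."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Paraphrase the following text, preserving its meaning:\n\n" + text,
        max_tokens=512,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# Feeding the paraphrased output back into the AI Text Classifier was,
# according to the article, enough to evade detection.
```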


To sum up: the classifier misses AI-generated text, mislabels human-written text as AI-generated, and can be defeated by simple rewriting tricks. At least in the field of AI text detection, OpenAI still has work to do.
