Is ChatGPT going to kill the data annotation industry? 20 times cheaper than humans and more accurate
Unexpectedly, the first group of people to be displaced by the advance of AI are the very people who help train it.
Many NLP applications require manual annotation of large amounts of data for a variety of tasks, especially training classifiers or evaluating the performance of unsupervised models. Depending on the scale and complexity, these tasks may be performed by crowdsourced workers on platforms such as MTurk as well as trained annotators such as research assistants.
We know that large language models (LLMs) exhibit "emergent" abilities once they reach a certain scale: that is, they acquire new capabilities that were not foreseen beforehand. As the large model driving the latest surge of AI, ChatGPT has exceeded expectations on many tasks, including labeling the datasets used to train models like itself.
Recently, researchers from the University of Zurich demonstrated that ChatGPT outperforms both crowdworkers on crowdsourcing platforms and trained research assistants on multiple annotation tasks, including relevance, stance, topic, and frame detection.
Additionally, the researchers did the math: ChatGPT costs less than $0.003 per annotation — roughly 20 times cheaper than MTurk. These results show the potential of large language models to greatly improve the efficiency of text classification.
Paper link: https://arxiv.org/abs/2303.15056
Many NLP applications require high-quality annotated data, especially for training classifiers or evaluating the performance of unsupervised models. For example, researchers sometimes need to filter noisy social media data for relevance, assign texts to different topic or conceptual categories, or measure their emotional stance. Regardless of the specific method used for these tasks (supervised, semi-supervised, or unsupervised learning), accurately labeled data is required to build a training set or use it as a gold standard to evaluate performance.
The usual way to handle this is to recruit research assistants or to use crowdsourcing platforms such as MTurk. When OpenAI built ChatGPT, it likewise outsourced the labeling of harmful content to a data annotation firm in Kenya and ran extensive annotation work before the official launch.
This report from the University of Zurich in Switzerland explores the potential of large language models (LLMs) for text annotation tasks, focusing on ChatGPT, released in November 2022. It shows that ChatGPT, used zero-shot (i.e., without any additional training), outperforms MTurk annotation on classification tasks at roughly one-twentieth of the cost of manual labor.
The researchers used a sample of 2,382 tweets collected in a previous study. The tweets had been labeled by trained annotators (research assistants) for five different tasks: relevance, stance, topic, and two kinds of frame detection. In the experiment, the researchers submitted the tasks to ChatGPT as zero-shot classification problems and, in parallel, to crowdworkers on MTurk, and then evaluated ChatGPT's performance on two benchmarks: its accuracy compared with that of the human workers on the crowdsourcing platform, and its accuracy measured against the gold standard produced by the research assistant annotators.
It was found that on four out of five tasks, ChatGPT had higher zero-shot accuracy than MTurk. ChatGPT's inter-coder agreement exceeds that of both MTurk workers and trained annotators on all tasks. Furthermore, in terms of cost, ChatGPT is much cheaper than MTurk: the five classification tasks cost about $68 on ChatGPT (25,264 annotations) and about $657 on MTurk (12,632 annotations).
That puts ChatGPT's cost per annotation at about $0.003, or roughly a third of a cent, which is about 20 times cheaper than MTurk, with higher quality. At that price, it becomes feasible to annotate far more samples or to create large training sets for supervised learning: based on these tests, 100,000 annotations would cost approximately $300.
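As a quick sanity check on those figures, here is a minimal back-of-the-envelope calculation in Python. The totals are the ones reported above; the script itself is only illustrative.

```python
# Cost comparison based on the totals reported in the study.
chatgpt_total_cost, chatgpt_annotations = 68.0, 25_264
mturk_total_cost, mturk_annotations = 657.0, 12_632

chatgpt_per_label = chatgpt_total_cost / chatgpt_annotations  # ~$0.0027
mturk_per_label = mturk_total_cost / mturk_annotations        # ~$0.052

print(f"ChatGPT: ${chatgpt_per_label:.4f} per annotation")
print(f"MTurk:   ${mturk_per_label:.4f} per annotation")
print(f"MTurk is ~{mturk_per_label / chatgpt_per_label:.0f}x more expensive")
print(f"100,000 ChatGPT annotations: ~${100_000 * chatgpt_per_label:.0f}")
```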
The researchers say that while further research is needed to better understand how ChatGPT and other LLMs perform in broader contexts, these results suggest they have the potential to change the way researchers annotate data and to disrupt part of the business model of platforms such as MTurk.
The researchers used a dataset of 2,382 tweets that had been manually annotated in previous studies on tasks related to content moderation. Specifically, trained annotators (research assistants) constructed gold standards for five conceptual tasks with varying numbers of categories: relevance of tweets to the issue of content moderation (relevant/irrelevant); stance on Section 230, a key piece of U.S. Internet legislation enacted as part of the Communications Decency Act of 1996; topic identification (six categories); a first set of frames (content moderation as a problem, as a solution, or neutral); and a second set of frames (fourteen categories).
The researchers then performed these exact same classifications with ChatGPT and with crowdworkers recruited on MTurk. Four sets of annotations were produced with ChatGPT. To explore the effect of ChatGPT's temperature parameter, which controls the degree of randomness in the output, the tweets were annotated both at the default value of 1 and at 0.2, which implies less randomness. For each temperature value, the researchers ran two sets of annotations in order to compute ChatGPT's inter-coder agreement.
As trained annotators, the study used two political science graduate students, who annotated the tweets for all five tasks. For each task, the coders were given the same set of instructions and asked to annotate the tweets independently, task by task. To calculate the accuracy of ChatGPT and MTurk, the comparison considered only those tweets on which both trained annotators agreed.
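To make the two evaluation metrics concrete, here is a small Python sketch (not the authors' code): inter-coder agreement as the share of items on which two annotation runs assign the same label, and accuracy computed only on the tweets where both research assistants agree. The example labels are hypothetical.

```python
def intercoder_agreement(run_a: list[str], run_b: list[str]) -> float:
    """Share of items on which two annotation runs assign the same label."""
    return sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)

def accuracy_on_gold(predicted: list[str], ra1: list[str], ra2: list[str]) -> float:
    """Accuracy restricted to tweets on which both research assistants agree."""
    kept = [(p, g1) for p, g1, g2 in zip(predicted, ra1, ra2) if g1 == g2]
    return sum(p == g for p, g in kept) / len(kept)

# Hypothetical labels, for illustration only.
run_1 = ["relevant", "irrelevant", "relevant", "relevant"]
run_2 = ["relevant", "irrelevant", "irrelevant", "relevant"]
ra_1 = ["relevant", "irrelevant", "relevant", "irrelevant"]
ra_2 = ["relevant", "irrelevant", "relevant", "relevant"]

print(intercoder_agreement(run_1, run_2))   # 0.75
print(accuracy_on_gold(run_1, ra_1, ra_2))  # 1.0 (scored on the 3 tweets the RAs agree on)
```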
For MTurk, the goal was to select the best group of workers, specifically by screening for workers classified by Amazon as "MTurk Masters", with an approval rating above 90%, and located in the United States.
The study used the "gpt-3.5-turbo" version of the ChatGPT API to classify the tweets. Annotation took place between March 9 and March 20, 2023. For each annotation task, the researchers deliberately avoided ChatGPT-specific prompt tricks such as "let's think step by step", to ensure comparability between ChatGPT and the MTurk crowdworkers.
After testing several variations, the researchers settled on feeding tweets to ChatGPT one at a time with a prompt of the form: "Here's the tweet I picked, please label it for [task-specific instructions (e.g., one of the topics in the instructions)]." In addition, four ChatGPT responses were collected for each tweet, and a new chat session was created for every tweet so that the results would not be influenced by annotation history.
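For readers who want a concrete picture of that loop, a minimal sketch using the OpenAI Python client might look like the following. This is an illustrative reconstruction, not the authors' code: the prompt wording is paraphrased from the description above, and the helper function name is made up.

```python
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def annotate_tweet(tweet: str, task_instructions: str,
                   temperature: float = 0.2, n_runs: int = 4) -> list[str]:
    """Label one tweet with gpt-3.5-turbo; each call is a fresh chat, so no history carries over."""
    labels = []
    for _ in range(n_runs):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=temperature,
            messages=[{
                "role": "user",
                # Prompt paraphrased from the study; no "let's think step by step" tricks.
                "content": f"Here's the tweet I picked, please label it for {task_instructions}\n\nTweet: {tweet}",
            }],
        )
        labels.append(response.choices[0].message.content.strip())
    return labels


# Hypothetical usage for the relevance task:
# annotate_tweet("Some tweet text...", "relevance to content moderation (relevant or irrelevant).")
```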
Figure 1. ChatGPT's zero-shot text annotation performance compared with top-rated annotators on MTurk. ChatGPT achieves better accuracy than MTurk on four of the five tasks.
In the figure above, of the four tasks where ChatGPT has the advantage, its edge is slight in one case (relevance), where its performance is very similar to MTurk's. In the other three cases (frames I, frames II, and stance), ChatGPT outperforms MTurk by a factor of 2.2 to 3.4. Furthermore, considering the difficulty of the tasks, the number of categories, and the fact that the annotations were zero-shot, ChatGPT's accuracy is generally more than adequate.
For relevance, which has two categories (relevant/irrelevant), ChatGPT reaches an accuracy of 72.8%, while for stance, which has three categories (positive/negative/neutral), it reaches 78.7%. Accuracy decreases as the number of categories increases, although the inherent difficulty of a task also plays a role.

Regarding inter-coder agreement, Figure 1 shows that ChatGPT's performance is very high, exceeding 95% on all tasks when the temperature parameter is set to 0.2. These values are higher than for any human coders, including the trained annotators. Even at the default temperature of 1 (which means more randomness), inter-coder agreement always exceeds 84%. The relationship between inter-coder agreement and accuracy is positive but weak (Pearson correlation coefficient: 0.17). Although this correlation is based on only five data points, it suggests that lower temperature values may be better suited to annotation tasks, since they appear to improve the consistency of the results without noticeably reducing accuracy.
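That relationship is simply a Pearson correlation computed across the five tasks. The sketch below shows the calculation; the per-task numbers are placeholders invented for illustration and are not the paper's measurements.

```python
from scipy.stats import pearsonr  # assumes scipy is installed

# Placeholder per-task values, NOT the study's actual measurements.
agreement = [0.96, 0.95, 0.97, 0.96, 0.98]  # ChatGPT inter-coder agreement per task
accuracy = [0.73, 0.79, 0.62, 0.58, 0.61]   # ChatGPT accuracy per task

r, p = pearsonr(agreement, accuracy)
print(f"Pearson r = {r:.2f} (only five data points, so interpret with caution)")
```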
It should be emphasized that this was a demanding test for ChatGPT. Content moderation is a complex topic that requires significant resources, and apart from the stance task, the conceptual categories were developed for specific research purposes. In addition, some tasks involve a large number of categories, yet ChatGPT still achieves high accuracy.
Using models to annotate data is nothing new. In computer science research involving large-scale datasets, it is common to hand-label a small number of samples and then use machine learning to extend the labels to the rest of the data. But now that ChatGPT has outperformed human annotators on these tasks, we may come to trust its judgments more in the future.