
Before submitting your paper to Nature, ask GPT-4 first! Stanford tested nearly 5,000 papers, and half of GPT-4's comments matched those of human reviewers

PHPz | 2023-10-06

Is GPT-4 capable of reviewing papers?

Researchers from Stanford and other universities put it to the test.

They fed thousands of papers from Nature, ICLR, and other top venues to GPT-4, had it generate review comments (including suggestions for revision), and then compared those comments with the ones given by human reviewers.

The study found that:

more than 50% of the comments raised by GPT-4 were also raised by at least one human reviewer;

and 82.4% of surveyed authors found GPT-4's feedback more useful than that of at least some human reviewers.

What can we take away from this research?

The conclusion is:

There is still no substitute for high-quality human feedback; but GPT-4 can help authors improve first drafts before formal peer review.


Let's look at the details.

Putting GPT-4's paper-reviewing ability to the test

To demonstrate GPT-4's potential, the researchers first built an automated pipeline around it.

The pipeline parses an entire paper in PDF format, extracts the title, abstract, figures, table captions, and other content to construct a prompt, and then asks GPT-4 to produce review comments.

Following the review criteria of the major venues, the comments cover four parts: the significance and novelty of the research, possible reasons for acceptance, possible reasons for rejection, and suggestions for improvement.
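To make the setup concrete, here is a minimal sketch of what such a pipeline might look like, assuming the pypdf and openai Python packages; the prompt wording, helper names, and truncation limit are illustrative, not the paper's actual implementation.

```python
# Illustrative sketch of an automated review pipeline (not the paper's code).
# Assumes the `pypdf` and `openai` packages; the prompt and limits are placeholders.
from pypdf import PdfReader
from openai import OpenAI

REVIEW_PROMPT = (
    "You are reviewing a scientific paper. Comment on four aspects:\n"
    "1. Significance and novelty\n"
    "2. Potential reasons for acceptance\n"
    "3. Potential reasons for rejection\n"
    "4. Suggestions for improvement\n\n"
    "Paper content:\n{paper_text}"
)

def extract_paper_text(pdf_path: str, max_chars: int = 30000) -> str:
    """Pull raw text (title, abstract, captions, body) out of the PDF."""
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return text[:max_chars]  # stay within the model's context window

def review_paper(pdf_path: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    paper_text = extract_paper_text(pdf_path)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": REVIEW_PROMPT.format(paper_text=paper_text)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_paper("submission.pdf"))
```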


The experiments were carried out along two lines.

First, the quantitative experiment:

have GPT-4 read existing papers and generate feedback, then systematically compare it with real human review comments to identify the overlapping parts.

Here, the team selected 3,096 papers from Nature and its major sister journals, plus 1,709 papers from the ICLR machine learning conference (covering the two most recent years), 4,805 papers in total.

The Nature papers came with a total of 8,745 human review comments; the ICLR papers came with 6,506.


After GPT-4 produces its comments, a matching stage in the pipeline extracts the individual arguments from both the human and GPT-4 reviews, then performs semantic text matching to find overlapping arguments. This overlap is used to measure the validity and reliability of GPT-4's comments.
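For illustration, here is one way such argument matching could be approximated with sentence embeddings and cosine similarity; the embedding model and the 0.7 threshold are assumptions, not the paper's exact matching procedure.

```python
# Illustrative semantic matching between human and GPT-4 review arguments.
# The embedding model and the 0.7 similarity threshold are assumptions,
# not the paper's exact matching method.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def overlapping_arguments(human_args, gpt_args, threshold=0.7):
    """Return (human_arg, gpt_arg, score) pairs judged semantically equivalent."""
    human_emb = model.encode(human_args, convert_to_tensor=True)
    gpt_emb = model.encode(gpt_args, convert_to_tensor=True)
    scores = util.cos_sim(human_emb, gpt_emb)  # pairwise cosine similarities
    matches = []
    for i, human_arg in enumerate(human_args):
        j = scores[i].argmax().item()  # best-matching GPT-4 argument
        if scores[i][j] >= threshold:
            matches.append((human_arg, gpt_args[j], float(scores[i][j])))
    return matches

human = ["The ablation study is missing.",
         "Novelty over prior work is unclear."]
gpt = ["The paper should clarify its novelty relative to existing methods.",
       "More datasets should be evaluated."]
print(overlapping_arguments(human, gpt))
```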

The results are:

1. GPT-4's comments overlap substantially with those of real human reviewers

Overall, 57.55% of the comments GPT-4 raised on Nature papers were also raised by at least one human reviewer; on ICLR, the figure was as high as 77.18%.


When GPT-4's comments were instead compared carefully against each individual reviewer, the team found that the overlap rate dropped to 30.85% on Nature papers and 39.23% on ICLR.

However, this is comparable to the overlap between two human reviewers: on Nature papers, the average human-human overlap is 28.58%; on ICLR it is 35.25%.


They also broke the results down by decision tier (oral, spotlight, or rejected) and found that for weaker papers, the overlap between GPT-4 and human reviewers rises from just over 30% to nearly 50%.

This suggests GPT-4 has real discriminating power and can pick out lower-quality papers.

The authors also note that papers needing substantial revision before acceptance stand to benefit the most: their authors can try GPT-4's suggestions before formally submitting.

2. GPT-4 provides non-generic feedback

By non-generic feedback, they mean that GPT-4 does not give boilerplate review comments that would apply equally well to many different papers.

Here, the authors measured a "pairwise overlap rate": how often the same comments recur in the feedback given to two different papers. It fell to just 0.43% on Nature and 3.91% on ICLR.

This shows that GPT-4's feedback is targeted at the specific paper in front of it.
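As a rough sketch of how such a pairwise overlap rate could be computed across papers, here is one possible formulation that reuses any matcher such as the overlapping_arguments() function above; the metric definition is an illustration, not the paper's exact formula.

```python
# Illustrative pairwise overlap across *different* papers: if GPT-4's feedback
# were generic, the same comments would recur from paper to paper and this
# number would be high. `match_fn` is any argument matcher, e.g. the
# overlapping_arguments() sketch above.
from itertools import combinations

def pairwise_overlap_rate(feedback_per_paper, match_fn):
    """feedback_per_paper: list of comment lists, one list per paper."""
    rates = []
    for args_a, args_b in combinations(feedback_per_paper, 2):
        matches = match_fn(args_a, args_b)
        rates.append(len(matches) / max(len(args_a), 1))
    return sum(rates) / max(len(rates), 1)  # mean over all paper pairs
```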

3. GPT-4 agrees with humans on major, common issues

Generally speaking, the comments that appear earliest in a review and are raised by multiple reviewers tend to represent important, common problems.

Here, the team found that the LLM is more likely to identify exactly those problems or defects that multiple reviewers unanimously point out.

In short, GPT-4's overall performance here holds up.

4. GPT-4's comments emphasize different aspects than humans'

The study found that GPT-4 was 7.27 times more likely than humans to comment on the significance of the research itself, and 10.69 times more likely to comment on its novelty.

Both GPT-4 and humans often recommend additional experiments, but humans focus more on ablations, while GPT-4 more often recommends trying more datasets.

The authors say these findings indicate that GPT-4 and human reviewers emphasize different aspects, and that combining the two could bring complementary benefits.

The second line of experiments was a user study.

A total of 308 researchers in AI and computational biology from different institutions took part. They uploaded their own papers for GPT-4 to review, and the research team collected their reactions to the feedback GPT-4 produced.


Overall, more than half of the participants (57.4%) found GPT-4's feedback helpful, including points that human reviewers had not thought of.

And 82.4% of those surveyed found it more useful than feedback from at least some human reviewers.

In addition, just over half (50.5%) said they would be willing to keep using large models such as GPT-4 to improve their papers.

One participant noted that GPT-4 returns its feedback in as little as five minutes; that speed alone is a real help to researchers polishing a paper.

Of course, the authors emphasize that GPT-4's capabilities still have limitations.

The most obvious is that its comments focus on the "big picture" and lack in-depth advice on specific technical areas (e.g., model architecture).

Hence the authors' final conclusion: high-quality feedback from human reviewers remains essential before formal review, but authors can test the waters with GPT-4 first to catch experimental and presentation details they may have missed.

They also caution that in formal review, reviewers should still work independently and not rely on any LLM.

All three authors are Chinese

The study has three authors, all of whom are Chinese and all from Stanford's computer science department.


They are:

  • Weixin Liang, a doctoral student in the department and a member of the Stanford AI Lab (SAIL). He holds a master's degree in electrical engineering from Stanford and a bachelor's degree in computer science from Zhejiang University.
  • Yuhui Zhang, also a doctoral student, who works on multimodal AI systems. He earned his bachelor's degree at Tsinghua University and his master's at Stanford.
  • Hancheng Cao, a fifth-year doctoral candidate in management science and engineering who also works with Stanford's NLP and HCI groups. He earned his bachelor's degree in electronic engineering at Tsinghua University.

Paper link: https://arxiv.org/abs/2310.01783

