Is it reliable to use ChatGPT to write papers? Some scholars tried it: full of loopholes, but a 'good' tool for padding out text
With its powerful text-generation capabilities, ChatGPT has a claim to being the strongest question-answering model available.
But powerful AI also brings negative effects, such as confidently posting wrong answers in Q&A communities or helping students write their papers.
A recent paper on arXiv has attracted the industry's attention. Researchers from the University of Santiago de Compostela in Spain published "The Challenges, Opportunities and Strategies of Artificial Intelligence in Drug Discovery"; what is special about this paper is that the authors used ChatGPT to assist in writing it.
Paper link: https://arxiv.org/abs/2212.08104
In the last paragraph of the abstract, a "Note from human-authors" states that the paper was created to test whether the writing capabilities of ChatGPT, a chatbot based on the GPT-3.5 language model, could help human authors write an opinion piece.
The author designed an instruction as an initial prompt for text generation, and then evaluated the automatically generated content. After a thorough review, the human authors actually rewrote the manuscript in an effort to maintain a balance between the original proposal and scientific standards. The article concludes with a discussion of the advantages and limitations of using artificial intelligence to achieve this goal.
But that raises another question: why isn't ChatGPT in the author list? (tongue firmly in cheek)
This article was generated with the assistance of ChatGPT, a natural language processing system released by OpenAI on November 30, 2022. Trained on a large corpus of text, it is able to generate human-like writing based on the input given to it.
For this article, the input provided by the human authors included the paper's topic (Applications of Artificial Intelligence in Drug Discovery), the number of sections to consider, and specific prompts and instructions for each section.
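As a rough illustration of how such a sectioned prompt might be assembled, here is a minimal sketch; the topic string and section instructions below are invented for the example, not the authors' actual inputs.

```python
# Hypothetical sketch of assembling a per-section prompt for a language
# model. The topic and section instructions are illustrative only.

def build_prompt(topic, sections):
    """Combine a paper topic and per-section instructions into one prompt."""
    lines = [
        f"Write a short review article on: {topic}.",
        f"The article has {len(sections)} sections.",
    ]
    for i, (title, instruction) in enumerate(sections, start=1):
        lines.append(f"Section {i} ({title}): {instruction}")
    return "\n".join(lines)

sections = [
    ("Introduction", "Introduce AI in drug discovery in 1-2 paragraphs."),
    ("Challenges", "Discuss data quality and ethical issues."),
]
prompt = build_prompt(
    "Applications of Artificial Intelligence in Drug Discovery", sections
)
```

The resulting string would then be sent to the model as a single instruction, which mirrors the workflow the paper describes: one initial prompt, followed by human review of the output.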
The text generated by ChatGPT required manual editing before it could be finalized: correcting and enriching the content, removing duplication and inconsistencies, and replacing all of the references suggested by the AI.
The final version of this work is the result of iterative revisions by the human authors, assisted by artificial intelligence. Comparing the preliminary text obtained directly from ChatGPT with the current version of the manuscript: 4.3% is exactly the same, 13.3% has only minor changes, and 16.3% shares related meaning. Of the references in ChatGPT's preliminary text, only 6% were correct.
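The paper does not say how these overlap percentages were computed, but one simple way to quantify how much of an AI draft survives into a final manuscript is a character-level matching ratio, shown here with Python's standard difflib on two made-up sentences.

```python
from difflib import SequenceMatcher

# Toy example: measure how similar an AI draft is to its human-edited
# final version. The two sentences are invented for illustration; this
# is not necessarily the metric the paper's authors used.
draft = "Artificial intelligence can accelerate drug discovery pipelines."
final = "Artificial intelligence has the potential to accelerate drug discovery."

# ratio() returns a value between 0.0 (nothing in common) and 1.0 (identical)
similarity = SequenceMatcher(None, draft, final).ratio()
```

Heavily edited passages would score low on such a measure, consistent with the paper's finding that only a small fraction of the ChatGPT draft survived unchanged.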
The original version generated by ChatGPT, along with the input used to create it, is provided as Supporting Information.
The illustration in the paper's abstract was generated by DALL-E.
The paper includes 10 sections and 56 references in total. Sections 1-9 contain only 1-2 paragraphs each, mainly covering the paper's topic, "Challenges, Opportunities and Strategies of Artificial Intelligence in Drug Discovery"; the tenth section discusses "Expert opinions of human authors on scientific writing tools based on ChatGPT and AI"; only the abstract contains an illustration.
Abstract
Artificial intelligence has the potential to revolutionize the drug discovery process, providing greater efficiency, accuracy and speed. However, the successful application of AI depends on the availability of high-quality data, the handling of ethical issues, and an awareness of the limitations of AI-based methods.
This article reviews the benefits, challenges, and shortcomings of artificial intelligence in this field and proposes possible strategies and approaches to overcome current obstacles.
The article also discusses the use of data augmentation, explainable artificial intelligence, the integration of artificial intelligence with traditional experimental methods, and the potential advantages of artificial intelligence in medical research.
Overall, this review highlights the potential of artificial intelligence in drug discovery and provides an in-depth exploration of the challenges and opportunities to realize its potential in this field.
Expert opinions of human authors on scientific writing tools based on ChatGPT and AI
ChatGPT is a chatbot based on the GPT-3.5 language model. It was not designed to be an assistant for writing scientific papers, but its ability to hold coherent conversations with humans, provide new information on a wide range of topics, and correct and even generate computational code has surprised the scientific community.
Therefore, we decided to test its potential and contribute to writing a short review on the role of artificial intelligence algorithms in drug discovery.
As an assistant for writing scientific papers, ChatGPT has several advantages, including the ability to quickly generate and optimize text, and to help users with several tasks, such as organizing information and, in some cases, even connecting ideas.
However, this tool is by no means ideal for generating new content.
After inputting the instructions, humans still had to revise the AI-generated text extensively, including replacing almost all of the references, because those provided by ChatGPT were clearly incorrect.
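A hypothetical first-pass filter for such AI-suggested references is sketched below: flag any entry whose DOI does not even match the standard DOI pattern. The helper name and sample entries are invented for illustration, and a real check would also have to resolve each DOI, since ChatGPT can fabricate well-formed but nonexistent identifiers.

```python
import re

# DOIs take the form "10.<registrant>/<suffix>", e.g. 10.1000/xyz123
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def flag_suspect_references(refs):
    """Return the references whose 'doi' field is missing or malformed."""
    return [r for r in refs if not DOI_PATTERN.match(r.get("doi", ""))]

# Illustrative entries, not from the actual paper
refs = [
    {"title": "Deep learning in drug discovery", "doi": "10.1000/xyz123"},
    {"title": "A made-up citation", "doi": "not-a-doi"},
]
suspect = flag_suspect_references(refs)
```

A format check like this only catches the crudest fabrications; the paper's point stands that human verification of every reference remains necessary.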
This is a major problem with ChatGPT at present, and a key difference from other computing tools such as search engines, which mainly provide reliable references for the required information.
There is another important problem with using AI-based tools for writing assistance: ChatGPT's training data ends in 2021, so it does not include the latest information.
Based on this writing experiment, we can say that ChatGPT is not a useful tool for writing reliable scientific texts without strong human intervention.
ChatGPT lacks the knowledge and expertise required to accurately and adequately communicate complex scientific concepts and information.
Additionally, the language and style used by ChatGPT may not be suitable for academic writing, and in order to produce high-quality scientific text, human input and review are essential.
One of the main reasons why this kind of artificial intelligence cannot yet be used to produce scientific articles is that it lacks the ability to evaluate the authenticity and reliability of the information it processes. Therefore, scientific texts generated by ChatGPT may well contain false or misleading information.
Also note that reviewers may find it difficult to distinguish articles written by humans from those written by this AI. This means the review process must be thorough to prevent the publication of false or misleading information.
There is a real risk that predatory journals may exploit the rapid production of scientific articles to produce large amounts of low-quality content. These journals are often driven by profit rather than a commitment to scientific progress, and they may use artificial intelligence to churn out articles, flooding the market with substandard research and undermining the credibility of the scientific community.
One of the greatest dangers is the potential proliferation of false information in scientific articles, which can lead to the devaluation of the scientific enterprise itself and a loss of trust in the accuracy and integrity of scientific research, which can adversely affect the progress of science.
There are several possible solutions to mitigate the risks associated with using artificial intelligence to produce scientific articles.
One solution is to develop artificial intelligence algorithms specifically designed to produce scientific articles. These algorithms can be trained on large datasets of high-quality, peer-reviewed research, which will help ensure the authenticity of the information they generate.
Additionally, these algorithms can be programmed to flag potentially problematic information, such as citing unreliable sources, which will alert researchers to the need for further review and verification.
Another approach is to develop artificial intelligence systems that are better able to assess the veracity and reliability of the information they process. This may involve training the AI on large datasets of high-quality scientific articles, as well as using techniques such as cross-validation and peer review to ensure the AI produces accurate and trustworthy results.
Another possible solution is to develop stricter guidelines and regulations for the use of artificial intelligence in scientific research, including requiring researchers to disclose their use of AI in producing articles and implementing review procedures to ensure that AI-generated content meets certain quality and accuracy standards.
This could also include requiring researchers to thoroughly review and verify the accuracy of any AI-generated information before publication, with penalties for those who fail to do so. Educating the public about the limitations of AI and the dangers of relying on it for scientific research could also help prevent the spread of misinformation and ensure that the public can better distinguish reliable from unreliable sources of scientific information.
Funding agencies and academic institutions can play a role in promoting the responsible use of artificial intelligence in scientific research by providing training and resources to help researchers understand the limitations of the technology.
Overall, addressing the risks associated with the use of artificial intelligence in the production of scientific articles will require a combination of technical solutions, regulatory frameworks and public education.
By implementing these measures, we can ensure that the use of artificial intelligence in the scientific community is responsible and effective. Researchers and policymakers must carefully consider the potential dangers of using artificial intelligence in scientific research and take steps to reduce these risks.
Until artificial intelligence can be trusted to produce reliable and accurate information, its use in the scientific community should be cautious, and the information provided by artificial intelligence tools must be carefully evaluated and verified using reliable sources.
Reference: https://arxiv.org/abs/2212.08104