
ChatGPT won’t reshape the healthcare industry, but it could help doctors save time

WBOY
2023-04-11


Every week, Eli Gelfand, chief of general cardiology at Beth Israel Deaconess Medical Center in Boston, wastes a lot of time writing letters he doesn’t want to write: appeals to insurance companies that have objected to his treatment plans. The dispute might be over a new drug for heart failure, a diagnostic opinion on a CT scan for a patient with chest pain, or an experimental prescription that could save a patient with stiff-heart syndrome. “It doesn’t matter that I don’t want to write them,” lamented Gelfand, an associate professor at Harvard Medical School. “After all, these letters represent the hope of saving lives.”

So when OpenAI’s ChatGPT made headlines with its fluent, coherent text generation, Gelfand saw an opportunity to save time. He gave the bot some basic information about a diagnosis and his medication recommendation (omitting the patient’s name) and asked it to write an appeal letter with references.
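Gelfand typed into the public chatbot, but the same drafting step could in principle be scripted. The sketch below is a minimal illustration under stated assumptions, not his or anyone else’s actual workflow: it assumes the OpenAI Python SDK (v1+), an illustrative model name, and made-up, de-identified case fields.

```python
# Minimal sketch: drafting an insurance appeal letter from de-identified case
# details with the OpenAI Python SDK (v1+). The model name, prompt wording and
# case fields are illustrative assumptions, not the workflow described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case = {
    "diagnosis": "heart failure with preserved ejection fraction",
    "denied_item": "a guideline-recommended heart-failure drug",
    "rationale": "first-line alternatives were ineffective or not tolerated",
}

prompt = (
    "Write a concise appeal letter to a health insurer contesting the denial of "
    f"{case['denied_item']} for a patient with {case['diagnosis']}. "
    f"Clinical rationale: {case['rationale']}. "
    "Cite relevant clinical guidelines and include no patient identifiers."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You draft formal medical insurance appeal letters."},
        {"role": "user", "content": prompt},
    ],
)

# Print the draft; the physician still reviews every claim and citation before sending.
print(response.choices[0].message.content)
```

As Gelfand’s experience suggests, references in such a draft can be wrong, so the output is only a starting point for the doctor to check and correct.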

ChatGPT did indeed produce usable letters, and it can turn them out in volume. Although some of the references contained errors, Gelfand said the letters “basically do not require major changes.” More importantly, it now takes him only a minute to get a result, far faster than writing one himself.

Gelfand said he has written about 30 appeal letters with ChatGPT, most of which insurers have approved. But he never imagined that ChatGPT or its underlying AI model would quickly reshape the entire U.S. medical industry. “What it does is basically make my life a little easier and get patients the drugs they need faster. This is just a workaround for a problem that shouldn’t exist in the first place.”

This “problem that shouldn’t exist” is the following: the United States spends more on health care administration than any other country in the world. In 2019, total U.S. health care spending reached $3.8 trillion, about a quarter of which went to administrative work such as the insurance appeals Gelfand complains about. An estimated $265 billion of that is “pure waste”: unnecessary spending to prop up outdated technology in the U.S. health care system. Although Gelfand can generate a digital appeal letter directly from the chatbot, he still has to deliver it to the insurance company by fax. That exposes the real challenge: even today’s most advanced AI back-office tools cannot, on their own, fix problems created by legacy systems dating back to the 1960s.

Cutting Ties with “Legacy” Systems

Nate Gross is co-founder and chief strategy officer of Doximity, a San Francisco-based social networking platform for medical professionals. He says fax machines are not going away any time soon: two million doctors and other health care professionals in the United States still use this old-school technology. So Doximity built its new workflow tool with that in mind. The DocsGPT chatbot helps doctors write various letters and certificates, and it connects directly to networked fax services.

Gross explained, “Our design philosophy is to make it as easy as possible for doctors to interact with emerging digital standards, while also being backward compatible with the various legacy tools actually used in the health care system.”

Doximity, affectionately called “the LinkedIn for doctors” by its users, currently has a market value of $6.3 billion, and most of its revenue ($344 million in fiscal year 2022) comes from advertising and recruiting placements for pharmaceutical companies and health systems. It also gives doctors a set of tools aimed at cutting costs and saving time, that is, at easing the burden of administrative work. Gross said the basic version of the product is completely free, while enterprise integration requires a paid plan.

Health care systems will keep using fax machines because, ironically, sharing data with them by fax is less difficult than supporting incompatible software systems.

DocsGPT is based on ChatGPT but trained on health care data, such as anonymized insurance appeal letters. Doctors can use the tool to draft correspondence, including patient referrals, insurance appeals, thank-you notes to colleagues, post-operative instructions and even death certificates. It also compiles a library of curated prompts based on what other doctors have searched for. Doximity is quick to stress that the tool is not a medical expert: before each response is generated, DocsGPT displays a disclaimer asking users to “please edit before sending to ensure content is accurate.”

During an earnings call earlier this month, company co-founder and CEO Jeff Tangney was asked how Doximity planned to monetize DocsGPT. “Let me answer with a joke: so far, what we care most about is how to sort out product liability. It isn’t time to think about profits yet.”

Robot to Robot

DocsGPT can indeed save doctors a lot of time, but because they can still only reach insurance companies by fax and phone, it can take days to verify a patient’s coverage or to obtain authorization and approval for a procedure. Doctors’ offices and hospitals still have to assign dedicated staff to talk to insurers, and insurers in turn keep staff glued to their screens, manually checking each patient’s coverage details.

Such a system places a huge burden on insurers and doctors alike, and it wastes enormous amounts of time and manpower. Ankit Jain, co-founder and CEO of conversational AI startup Infinitus Systems, said, “The whole process is not only slow, it’s also very messy and cumbersome. One insurance company we contacted asked for 32 faxes, glued together one by one.”

Since its founding in 2019, Infinitus has raised more than $50 million. Jain hopes to open up a new future where hospitals and insurance companies can truly exchange valuable information, rather than endlessly getting bogged down in medical insurance details and approval work.

Before founding Infinitus, Jain worked at Google, where he co-founded its AI fund, Gradient Ventures. The biggest problem, in his view, is that every doctor, every insurance company and every health system records information in a different format. Unlike people who have spent years in the medical industry, AI can learn these formats quickly. Infinitus builds its own models and does not rely on OpenAI’s technology, but Jain concedes that the basic premise is the same: “The job of a large language model is to absorb and digest massive amounts of data and learn to extract the correct connections between text and concepts.”

So far, the conversation has been one-sided: using large language models, Infinitus built EVA Lightyear, a bot that has made more than one million calls to insurance companies on behalf of doctors to verify coverage details and obtain prior authorization. Looking ahead, Jain hopes EVA will no longer have to talk to a human on the other end of the line; the exchange will truly be robot to robot.

"Of course, what I want to say is not that robots still communicate in English, nor do they automatically exchange faxes. In the future, both parties will use APIs. This is the real digital highway. We only need to submit information, The other party can quickly review and approve quickly, allowing the hospital to receive an immediate response."

Still in the "crawling" stage

Although Jain is quite optimistic about end-to-end automation, in practice chatbots and other AI-driven technologies still face huge obstacles. In ChatGPT’s case, keeping its ability to answer questions like a real person requires constant retraining on the latest information.

Nigam Shah, chief data scientist at Stanford Health Care, put it this way: “When a doctor makes up facts, we call it ‘lying.’ When an AI model fabricates facts, we use a strange word to describe it: ‘hallucination.’”

ChatGPT’s training data extends only through 2021 and has not been regularly updated since. Medicine changes constantly, with new guidelines, drugs and devices arriving all the time, so outdated data inevitably causes problems. Shah said he does not see generative AI being widely adopted in health care unless systems are put in place to regularly retrain models on new information and to reliably detect wrong answers. “We have to figure out a way to efficiently verify the authenticity and correctness of the output.”

Another risk is that doctors, whether well-intentioned or not, will feed protected health information into ChatGPT. Linda Malek, a partner at the law firm Moses & Singer, believes that although anonymization and encryption can help protect patient data, on their own they are not enough. “Even if you try to de-identify the data stored in ChatGPT, it is entirely possible for AI to reconstruct the information. ChatGPT has also become a prime target for cybercrime and may be used in attacks such as ransomware.”

Despite the many potential risks, generative AI’s impressive results continue to attract large numbers of users. In a study published in January, researchers found that ChatGPT could already pass the U.S. medical licensing exam with “a certain degree of accuracy.” (Besides ChatGPT, Google’s Flan-PaLM and the Chinese AI bot Xiaoyi have also passed national medical licensing exams.)

Morgan Cheatham, an investor at Bessemer Venture Partners and a medical student at Brown University, believes that this ability to perform standardized tasks without special training on health care data sets is what has drawn so much attention to ChatGPT. In Cheatham’s view, such results show that large language models like ChatGPT “have a certain value in health care applications.” But he also concedes that any further application will have to go “from crawling to walking, and finally to running.”

For now, generative AI can do at least one thing: help doctors save time and energy so they can devote themselves to caring for patients. David Canes, a urologist at Beth Israel Deaconess Medical Center and co-founder of patient education startup Wellprept, said frankly, “The reason I became a doctor is that I love the feeling of interacting directly with patients. But now that wonderful feeling is being torn apart by thousands of mouse clicks and keystrokes.”

Canes said he plans to use ChatGPT for “low-risk communications,” and he looks forward to handling the endless rigid rules and red tape more efficiently.

"If I could spend all my time diagnosing and treating patients, it would be a perfect experience for me. I love this feeling as much now as I did then. Looking at various technological improvements, I hope we We are on the boundary between the old and new eras, and we also expect technology to help us improve the worst aspects of the medical field."
