
Multimodal AI is the future of medicine. Google launches three new models, and Med-Gemini welcomes a major upgrade

WBOY
2024-05-30 10:13:19


Editor | Cabbage Leaf

Many clinical tasks require understanding specialized data such as medical images and genomics. This kind of expert knowledge is usually absent from the training data of general-purpose multimodal large models...

As described in the previous paper, Med-Gemini surpassed the GPT-4 series models and achieved SOTA on a variety of medical imaging tasks!

Here, Google DeepMind has written a second paper on Med-Gemini.

Building on Gemini's multimodal foundation, the team developed multiple models in the Med-Gemini series. These models inherit Gemini's core capabilities and are optimized for medical use through fine-tuning on 2D and 3D radiology, histopathology, ophthalmology, dermatology, and genomics data.

1. Med-Gemini-2D: processes radiology, pathology, dermatology, and ophthalmology images;
2. Med-Gemini-3D: processes CT volumes;
3. Med-Gemini-Polygenic: processes genomic data encoded as "images".

The study was titled "Advancing Multimodal Medical Capabilities of Gemini" and was published on the arXiv preprint platform on May 6, 2024.


Medical data comes from many sources, including biobanks, electronic health records, medical imaging, wearable devices, biosensors, and genome sequencing. These data are driving the development of multimodal AI solutions that better capture the complexity of population health and disease.

AI in medicine has primarily focused on narrow tasks with a single input and output type, but recent advances in generative AI show promise for solving multimodal, multitask challenges in medical settings.

Multimodal generative AI, represented by powerful models such as Gemini, has great potential to revolutionize healthcare. However, because medical data is highly specialized, general-purpose models often perform poorly when applied to the medical domain, even though medicine is a rich source of data for rapidly iterating on these new models.

Based on the core functions of Gemini, DeepMind has launched three new models of the Med-Gemini series, Med-Gemini-2D, Med-Gemini-3D, and Med-Gemini-Polygenic.

Illustration: Med-Gemini Overview. (Source: paper)

More than 7 million data samples, drawn from 3.7 million medical images and cases, were used to train the models. Various visual question answering and image captioning datasets were used, including some private hospital datasets.

To process 3D data (CT), the Gemini video encoder is used, with the temporal dimension treated as the depth dimension. To process genomic data, polygenic risk scores (PRS) for various traits are encoded as RGB pixels in an image.
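The video-encoder trick can be pictured with a small sketch. This is a hypothetical illustration, not the paper's actual preprocessing: the Hounsfield window values and channel replication below are assumptions. The idea is simply that a CT volume becomes a "video" whose frames are the slices along the depth axis, so an encoder built for (time, height, width, channels) input can consume it.

```python
import numpy as np

def ct_volume_to_video_frames(volume: np.ndarray) -> np.ndarray:
    """Reinterpret a CT volume (depth, H, W) as video frames (time, H, W, 3),
    so the depth axis plays the role of the time axis a video encoder expects."""
    # Window/normalize Hounsfield units to [0, 1] (illustrative soft-tissue window).
    lo, hi = -160.0, 240.0
    vol = np.clip(volume.astype(np.float32), lo, hi)
    vol = (vol - lo) / (hi - lo)
    # Replicate the single grayscale channel to 3 channels, matching the
    # RGB frames a video encoder is typically trained on.
    return np.repeat(vol[..., None], 3, axis=-1)  # (depth, H, W, 3)

vol = np.random.uniform(-1000, 1000, size=(40, 64, 64))  # toy CT volume
frames = ct_volume_to_video_frames(vol)
print(frames.shape)  # (40, 64, 64, 3)
```

The key design point is that no new 3D architecture is needed: an existing video encoder's temporal axis is reused as-is for slice depth.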


Illustration: Example of predicting coronary artery disease using an individual's PRS image and demographic information. (Source: Paper)
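The PRS-to-image encoding can likewise be sketched. This is purely illustrative; the rank normalization, image size, and pixel-packing order below are assumptions, not the paper's actual scheme. The point is that a vector of risk scores can be packed into an RGB image so that an ordinary image encoder can consume genomic data.

```python
import numpy as np

def prs_to_rgb_image(scores, side: int = 32) -> np.ndarray:
    """Pack a vector of polygenic risk scores into a square RGB image.
    Each score is rank-normalized to [0, 1] and written into consecutive
    pixel channels; remaining channels are zero-padded."""
    scores = np.asarray(scores, dtype=np.float64)
    if len(scores) > side * side * 3:
        raise ValueError("too many scores for the chosen image size")
    # Rank-normalize so a few extreme scores don't saturate the image.
    ranks = scores.argsort().argsort().astype(np.float64)
    norm = ranks / max(len(scores) - 1, 1)
    img = np.zeros(side * side * 3)
    img[: len(norm)] = norm
    return img.reshape(side, side, 3)

img = prs_to_rgb_image(np.random.randn(2048))
print(img.shape)  # (32, 32, 3)
```

As with the CT case, the design choice is reuse: rather than adding a dedicated genomics tower, the scores are reshaped into the image modality the model already understands.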

Med-Gemini-2D

Based on expert assessment, Med-Gemini-2D set a new standard for AI-based chest X-ray (CXR) report generation, surpassing the previous best results on two independent datasets by absolute margins of 1% and 12%. On the two datasets, 57% and 96% of the AI reports for normal cases, and 43% and 65% for abnormal cases, were judged "equivalent to" or even "better than" the original radiologists' reports.


Illustration: Med-Gemini-2D performance on the chest X-ray classification task. (Source: paper)

Med-Gemini-2D outperforms the larger general-purpose Gemini 1.0 Ultra model on in-distribution chest X-ray classification (examples from datasets seen during training). On out-of-distribution tasks, performance varies.


Illustration: Med-Gemini-2D histopathology image classification performance. (Source: paper)

Med-Gemini mostly outperformed Gemini Ultra on histopathology classification tasks, but failed to outperform the pathology-specific base model.


Illustration: Performance of PAD-UFES-20 classification task. (Source: paper)

On skin lesion classification, a similar trend is observed (domain-specific model > Med-Gemini > Gemini Ultra), although Med-Gemini is very close to the domain-specific model.


Illustration: Performance comparison of Med-Gemini-2D, Gemini Ultra, and a supervised model trained using additional data for fundus image classification. (Source: paper)

For ophthalmology classification, a similar pattern appears. Note that the domain-specific model was trained on roughly 200x more data, so Med-Gemini performs quite well by comparison.


Illustration: Evaluation details for VQA tasks. (Source: paper)

The team also evaluated Med-Gemini-2D on medical visual question answering (VQA). Here the model is very strong, often beating SOTA models. Med-Gemini-2D performed well on CXR classification and radiology VQA, exceeding SOTA or the baseline on 17 of 20 tasks.


Illustration: Evaluation details for chest X-ray report generation. (Source: paper)

Beyond narrow interpretation of medical images, the authors evaluated Med-Gemini-2D's performance on chest X-ray report generation and observed that it achieved SOTA under expert evaluation!

Med-Gemini-3D


Illustration: Human evaluation results for head CT volume report generation. (Source: paper)

Med-Gemini-3D goes beyond 2D images to automated, end-to-end CT report generation. According to expert assessment, 53% of these AI reports were judged clinically acceptable. Although further work is needed to match the quality of expert radiologists' reports, this is the first generative model capable of this task.

Med-Gemini-Polygenic

Finally, Med-Gemini-Polygenic’s prediction of health outcomes was evaluated based on polygenic risk scores for various traits. The model generally outperforms existing baselines.


Illustration: Health outcome predictions using Med-Gemini-Polygenic compared with two baselines, for in-distribution and out-of-distribution outcomes. (Source: paper)

Here are some examples of multimodal conversations supported by Med-Gemini!


Illustration: Example of a 2D medical image conversation via open question and answer. (Source: paper)

In histopathology, ophthalmology, and dermatology image classification, Med-Gemini-2D surpassed the baseline in 18 of 20 tasks and approached task-specific model performance.

Conclusion

Overall, this work makes useful progress toward a general multimodal medical AI model, though there is clearly still much room for improvement. Many domain-specific models outperform Med-Gemini, but Med-Gemini performs well with less data and more general methods. Interestingly, Med-Gemini appears to do better on tasks that rely more heavily on language understanding, such as VQA and radiology report generation.

The researchers envision a future in which all of these individual functions are integrated into comprehensive systems to perform a range of complex multidisciplinary clinical tasks. AI works alongside humans to maximize clinical efficacy and improve patient outcomes.

Paper link: https://arxiv.org/abs/2405.03162

Related content: https://twitter.com/iScienceLuvr/status/1789216212704018469

