
Defeating 25 molecular design algorithms, Georgia Tech, University of Toronto, and Cornell proposed large language model MOLLEO

2024-07-02


Author | Wang Haorui, Georgia Institute of Technology

Editor | ScienceAI

Molecular discovery, framed as an optimization problem, poses significant computational challenges because the optimization objective may not be differentiable. Evolutionary algorithms (EAs) are commonly used to optimize black-box objectives in molecular discovery, traversing chemical space through random mutation and crossover, but this randomness demands a large number of expensive objective evaluations.

In this work, researchers from the Georgia Institute of Technology, the University of Toronto, and Cornell University propose Molecular Language-Enhanced Evolutionary Optimization (MOLLEO), which integrates pre-trained large language models (LLMs) carrying chemical knowledge into evolutionary optimization, significantly improving the molecular optimization capability of the evolutionary algorithm.

The study, titled "Efficient Evolutionary Search Over Chemical Space with Large Language Models", was posted on the arXiv preprint platform on June 23.


Paper link: https://arxiv.org/abs/2406.16976

The huge computational challenge of molecular discovery

Molecular discovery is a complex, iterative process of design, synthesis, evaluation, and refinement, with wide-ranging real-world applications in drug design, materials design, energy, and disease-related problems. The process is slow and laborious: even approximate computational evaluation demands significant resources, because design conditions are complex and assessing molecular properties often requires expensive procedures such as wet-lab experiments, bioassays, and computational simulations.

Therefore, developing efficient molecular search, prediction and generation algorithms has become a research hotspot in the field of chemistry to accelerate the discovery process. In particular, machine learning-driven methods have played an important role in rapidly identifying and proposing promising molecular candidates.

Given the importance of the problem, molecular optimization has received great attention: more than 20 molecular design algorithms have been developed and tested (among them, combinatorial optimization methods such as genetic algorithms and reinforcement learning lead other generative models and continuous optimization algorithms); see a recent review article in a Nature sub-journal for details. One of the most effective families of methods is evolutionary algorithms (EAs). These algorithms require no gradient evaluation, making them well suited to black-box objective optimization in molecular discovery.

However, a major drawback of these algorithms is that they generate candidate structures at random, without exploiting task-specific information, and therefore require extensive objective function evaluation. Because property evaluation is expensive, molecular optimization must not only find the molecular structure with the best expected properties but also minimize the number of objective function evaluations (equivalently, maximize search efficiency).
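
To make the efficiency requirement concrete, below is a minimal Python sketch (not from the paper) of a budget-limited oracle wrapper; `objective` is a hypothetical stand-in for an expensive property evaluator, and caching repeated queries is one simple way to stretch a fixed evaluation budget.

```python
# Minimal sketch of a budget-limited black-box oracle (illustrative only).
# `objective` is a hypothetical stand-in for an expensive property
# evaluator such as a docking run or a bioassay surrogate.
class BudgetedOracle:
    def __init__(self, objective, budget=10_000):
        self.objective = objective
        self.budget = budget
        self.calls = 0
        self.cache = {}  # re-scoring an identical molecule is free

    def __call__(self, smiles: str) -> float:
        if smiles in self.cache:
            return self.cache[smiles]
        if self.calls >= self.budget:
            raise RuntimeError("evaluation budget exhausted")
        self.calls += 1
        self.cache[smiles] = self.objective(smiles)
        return self.cache[smiles]
```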

Recently, LLMs have demonstrated basic capabilities on multiple chemistry-related tasks, such as predicting molecular properties, retrieving optimal molecules, automating chemical experiments, and generating molecules with target properties. Because LLMs are trained on large-scale text corpora covering a wide range of tasks, they exhibit general language understanding and basic chemical knowledge, making them an interesting tool for chemical discovery tasks.

However, many LLM-based methods rely on in-context learning and prompt engineering, which becomes problematic when designing molecules against strict numerical goals: an LLM may struggle to satisfy precise numerical constraints or optimize specific numerical targets. Furthermore, methods that rely solely on LLM prompting may generate physically implausible molecules, or invalid SMILES strings that cannot be decoded into chemical structures.
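
As an illustration of the validity issue, the sketch below uses RDKit to keep only LLM outputs that parse into a chemical structure; this is a generic sanity filter, not the paper's exact pipeline.

```python
# Generic validity filter for LLM-proposed SMILES (illustrative sketch).
from rdkit import Chem

def keep_valid_smiles(candidates):
    valid = []
    for smi in candidates:
        mol = Chem.MolFromSmiles(smi)  # None if the string cannot be parsed
        if mol is not None:
            valid.append(Chem.MolToSmiles(mol))  # canonicalized SMILES
    return valid

print(keep_valid_smiles(["CCO", "c1ccccc1", "not_a_molecule"]))
# -> ['CCO', 'c1ccccc1']
```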

Molecular Language Enhanced Evolutionary Optimization

In this study, we propose Molecular Language-Enhanced Evolutionary Optimization (MOLLEO), which integrates LLMs into EAs to improve the quality of generated candidates and accelerate the optimization process. MOLLEO uses an LLM as a genetic operator, generating new candidates through crossover or mutation. To our knowledge, this is the first demonstration of integrating LLMs into an EA framework for molecule generation.

In this study, we considered three language models with different capabilities: GPT-4, BioT5, and MoleculeSTM. We integrate each LLM into different crossover and mutation procedures and demonstrate our design choices through ablation studies.

We demonstrate the superior performance of MOLLEO through experiments on multiple black-box optimization tasks, covering both single-objective and multi-objective optimization. On all tasks, including the more challenging protein-ligand docking, MOLLEO outperforms the baseline EA and 25 other strong baseline methods. Additionally, we show that MOLLEO can further optimize the best JNK3 inhibitor molecules in the ZINC 250K database.

Our MOLLEO framework builds on a simple evolutionary algorithm, Graph-GA, and enhances it by integrating chemistry-aware LLMs into the genetic operations.

We first outline the problem statement, emphasizing the need to minimize expensive objective evaluations in black-box optimization. MOLLEO utilizes LLMs such as GPT-4, BioT5, and MoleculeSTM to generate new candidate molecules guided by target descriptions.

Specifically, in the crossover step, instead of randomly combining two parent molecules, we prompt the LLM to generate offspring that maximize the target fitness function. In the mutation step, the operator mutates the fittest member of the current population according to the target description. However, we observed that the LLM does not always generate candidates with higher fitness than the input molecules, so we introduce a selection pressure that filters edited molecules based on structural similarity.
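
The sketch below illustrates one generation of such a loop under simplifying assumptions: `llm_crossover` and `llm_mutate` are hypothetical wrappers around prompts to one of the LLMs, `oracle` scores a SMILES string (and should cache repeated calls), and Tanimoto similarity over Morgan fingerprints stands in for the structural-similarity selection pressure. It is a schematic of the idea, not the paper's implementation.

```python
# Schematic of one MOLLEO-style generation (illustrative, not the paper's code).
import random
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def tanimoto(smi_a, smi_b):
    """Structural similarity used as selection pressure on LLM edits."""
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2)
           for s in (smi_a, smi_b)]
    return DataStructs.TanimotoSimilarity(*fps)

def molleo_generation(population, oracle, llm_crossover, llm_mutate,
                      n_offspring=20, sim_threshold=0.4):
    ranked = sorted(population, key=oracle, reverse=True)
    offspring = []
    for _ in range(n_offspring):
        pa, pb = random.sample(ranked[:10], 2)     # draw two fit parents
        child = llm_mutate(llm_crossover(pa, pb))  # LLM-guided operators
        if Chem.MolFromSmiles(child) is None:
            continue  # discard invalid SMILES
        if tanimoto(child, pa) >= sim_threshold:   # similarity filter
            offspring.append(child)
    # survivor selection: the fittest individuals form the next population
    return sorted(set(ranked + offspring), key=oracle, reverse=True)[:len(population)]
```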

Experimental results

We evaluated MOLLEO on 18 tasks. Tasks are selected from PMO and TDC benchmarks and databases and can be divided into the following categories:

  1. Structure-based optimization: optimize molecules toward a target structure, including isomer generation from a target molecular formula (isomers_c9h10n2o2pf2cl) and two tasks that match or avoid scaffolds and substructural motifs (deco_hop, scaffold_hop).
  2. Name-based optimization: includes finding compounds similar to known drugs (mestranol_similarity, thiothixene_rediscovery) and three multi-property optimization (MPO) tasks that rediscover known drugs (Perindopril, Ranolazine, Sitagliptin) while optimizing other properties such as hydrophobicity (LogP) and topological polar surface area (TPSA, a permeability proxy). Although these tasks mostly involve rediscovering existing drugs rather than designing new molecules, they probe the LLM's fundamental chemical optimization capabilities.
  3. Property optimization: includes the simple property optimization task QED, which measures a molecule's drug-likeness (a minimal QED example follows this list). We then focus on three PMO tasks measuring activity against the proteins DRD2 (dopamine receptor D2), GSK3β (glycogen synthase kinase-3β), and JNK3 (c-Jun N-terminal kinase-3). Additionally, we include three protein-ligand docking tasks from TDC (structure-based drug design), which are closer to real-world drug design than simple physicochemical properties.
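
For reference, the QED score named in category 3 is available directly in RDKit; a minimal example, with aspirin as an arbitrary input:

```python
# QED drug-likeness of a single molecule via RDKit.
from rdkit import Chem
from rdkit.Chem import QED

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(round(QED.qed(mol), 3))  # drug-likeness score in [0, 1]
```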

To evaluate our method, we follow the PMO benchmark protocol, which accounts for both the objective value and the computational budget, and report the area under the curve of the top-k average property value versus the number of objective function calls (AUC top-k).
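
A sketch of this metric, assuming scores arrive in oracle-call order and adopting one simple convention (before k molecules exist, the missing slots count as zeros in the top-k mean):

```python
# Sketch of the AUC top-k metric: average the running top-k mean over the
# full call budget, so high scores reached early are rewarded.
def auc_top_k(scores_in_call_order, k=10, budget=10_000):
    best, curve = [], []
    for s in scores_in_call_order:
        best = sorted(best + [s], reverse=True)[:k]
        curve.append(sum(best) / k)  # running top-k mean after this call
    curve += [curve[-1]] * (budget - len(curve))  # pad runs that stop early
    return sum(curve) / budget  # normalized area under the curve

print(auc_top_k([0.1, 0.5, 0.3, 0.9, 0.8], k=2, budget=5))  # -> 0.46
```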

As baselines, we used the top models from the PMO benchmark, including the reinforcement-learning-based REINVENT, the basic evolutionary algorithm Graph-GA, and Gaussian process Bayesian optimization (GP BO).


Illustration: Top-10 AUC on single-objective tasks. (Source: paper)

We conducted single-objective optimization experiments on 12 PMO tasks; the results are shown in the table above. We report the AUC top-10 score for each task and the overall ranking of each model. The results show that using any of the large language models (LLMs) as a genetic operator improves performance beyond the default Graph-GA and all other baseline models.

GPT-4 outperformed all models on 9 of the 12 tasks, demonstrating the effectiveness and promise of a general-purpose large language model for molecule generation. BioT5 achieved the second-best results among all tested models, with an overall score close to GPT-4's, indicating that small models trained and fine-tuned on domain knowledge are also promising within MOLLEO.

MoleculeSTM is a small CLIP-style model fine-tuned on pairs of molecules and their natural language descriptions. Within the evolutionary algorithm, we use gradient descent against the same natural language target description to generate new molecules, and this variant also outperforms the other baseline methods.


Illustration: Population fitness on the JNK3 inhibition task as the number of iterations increases. (Source: paper)

To verify the effectiveness of integrating an LLM into the EA framework, we plot the score distribution of the initial random molecule pool on the JNK3 task. We then apply a single round of LLM editing to every molecule in the pool and plot the JNK3 score distribution of the edited molecules.

The results show that every LLM-edited distribution shifts slightly toward higher scores, indicating that the LLMs do provide useful modifications. However, the overall objective scores remain low, so a single editing step is insufficient, and iterative optimization with the evolutionary algorithm is necessary.


Illustration: The average docking score of the top 10 molecules when docked with DRD3, EGFR or adenosine A2A receptor protein. (Source: paper)

In addition to the 12 single-objective PMO tasks, we also tested MOLLEO on more challenging protein-ligand docking tasks, which are closer to real-world molecule generation scenarios than the single-objective tasks. The figure above plots the average docking score of the ten best molecules from MOLLEO and Graph-GA against the number of objective function calls.

The results show that for all three proteins, the docking scores of molecules generated by our method are almost always better than the baseline model's, with faster convergence. Among the three language models, BioT5 performed best. In practice, better docking scores and faster convergence reduce the number of bioassays needed to screen molecules, making the process more cost- and time-efficient.


Illustration: Score sums and hypervolume for multi-objective tasks. (Source: paper)


Illustration: Pareto optimal visualization of Graph-GA and MOLLEO on multi-objective tasks. (Source: paper)

For multi-objective optimization, we consider two metrics: the AUC top-10 of the summed scores across all optimization objectives, and the hypervolume of the Pareto-optimal set. We present results on three tasks. Tasks 1 and 2 are inspired by drug discovery goals and optimize three objectives simultaneously: maximizing a molecule's QED, minimizing its synthetic accessibility (SA) score (i.e., easier to synthesize), and maximizing its JNK3 (Task 1) or GSK3β (Task 2) binding score. Task 3 is more challenging, requiring simultaneous optimization of five objectives: maximizing the QED and JNK3 binding scores while minimizing the GSK3β binding, DRD2 binding, and SA scores.
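
A simplified two-objective sketch of these metrics, assuming every objective has been rescaled so that higher is better with scores in [0, 1] (minimized objectives such as SA are negated and normalized first) and the hypervolume reference point is the origin:

```python
# Pareto front and exact 2-D hypervolume (illustrative sketch; higher is
# better on both axes, reference point at the origin).
def pareto_front(points):
    """Keep points not dominated by any other point."""
    return [p for p in points
            if not any(q != p and all(qi >= pi for qi, pi in zip(q, p))
                       for q in points)]

def hypervolume_2d(front, ref=(0.0, 0.0)):
    hv, max_y = 0.0, ref[1]
    for x, y in sorted(front, reverse=True):  # first objective, descending
        if y > max_y:
            hv += (x - ref[0]) * (y - max_y)  # newly dominated rectangle
            max_y = y
    return hv

pts = [(0.9, 0.2), (0.5, 0.8), (0.4, 0.4)]
front = pareto_front(pts)                 # -> [(0.9, 0.2), (0.5, 0.8)]
print(round(hypervolume_2d(front), 3))    # -> 0.48
```

The summed-score metric simply feeds the per-molecule sum of objective values into the AUC top-10 computation sketched earlier.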

We find that MOLLEO (GPT-4) consistently outperforms the baseline Graph-GA on both hypervolume and score sum across all three tasks. In the figure, we visualize the Pareto-optimal sets (in objective space) of our method and Graph-GA on Tasks 1 and 2. The performance of the open-source language models degrades as more objectives are introduced; we speculate that this degradation stems from their inability to handle large amounts of information-dense context.


Illustration: Initializing MOLLEO using the best molecules in ZINC 250K. (Source: paper)

The ultimate goal of the evolutionary algorithm is to improve on the initial molecule pool and discover new molecules. To probe MOLLEO's capacity for discovery, we initialize the pool with the best molecules from ZINC 250K and then optimize with MOLLEO and Graph-GA. Experimental results on the JNK3 task show that our algorithm consistently outperforms the Graph-GA baseline and improves on the best molecules found in the existing dataset.

We also note that BioT5's training set is the ZINC20 database (1.4 billion compounds) and MoleculeSTM's is drawn from the PubChem database (about 250,000 molecules). We checked whether the final molecules each model generated on the JNK3 task appear in the corresponding dataset and found no overlap, showing that the models can generate new molecules that were not present in their training sets.
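
The overlap check itself is simple; a sketch, assuming all SMILES are valid and `training_smiles` is a placeholder for a set exported from ZINC or PubChem:

```python
# Novelty check: canonicalize and test membership (illustrative sketch).
from rdkit import Chem

def novel_molecules(generated_smiles, training_smiles):
    canon = lambda s: Chem.MolToSmiles(Chem.MolFromSmiles(s))
    train = {canon(s) for s in training_smiles}
    return [s for s in generated_smiles if canon(s) not in train]

# Canonicalization matters: these two strings are the same molecule (benzene).
print(novel_molecules(["C1=CC=CC=C1"], ["c1ccccc1"]))  # -> []
```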

Applicable to drug discovery, materials, and biomolecule design

Molecular discovery and design is a rich field with numerous practical applications, many beyond the scope of the current study but still amenable to our proposed framework. MOLLEO combines LLMs with EAs into a flexible, purely text-driven algorithmic framework. In the future, MOLLEO could be applied to drug discovery, expensive computer simulations, and the design of materials or large biomolecules.

In future work, we will focus on improving the quality of generated molecules, both their objective values and the speed of discovery. As LLMs continue to advance, we expect the performance of the MOLLEO framework to improve with them, making it a promising tool for generative chemistry applications.

