In-depth understanding of charts: ChartLlama, an open-source chart model from Tencent, Nanyang Technological University, and others
Multi-modal large models have demonstrated excellent performance in image understanding. However, existing multi-modal models still leave room for improvement on the chart understanding and generation tasks that frequently arise in practical work.
Although the current state-of-the-art models in chart understanding perform well on simple test sets, they cannot handle more complex question-answering tasks because they lack language understanding and generation capabilities. On the other hand, multi-modal models built on large language models also perform unsatisfactorily, mainly because they lack chart training samples. These problems have seriously restricted progress on chart understanding and generation tasks.
Recently, Tencent, Nanyang Technological University, and Southeast University proposed ChartLlama. The research team created a high-quality chart dataset and trained a multi-modal large language model focused on chart understanding and generation. ChartLlama combines language processing, chart generation, and other capabilities, providing a powerful research tool for researchers and related practitioners.
Paper address: https://arxiv.org/abs/2311.16483
Home page address: https://tingxueronghua.github.io/ChartLlama/
The ChartLlama team designed a diversified data collection strategy, using GPT-4 to generate data with specific topics, distributions, and trends to ensure dataset diversity. The team combined open-source plotting libraries with GPT-4's programming capabilities to write precise charting code, producing accurate graphical representations of the data. In addition, the team used GPT-4 to describe chart contents and generate question-answer pairs, producing rich and diverse training samples for each chart so that the trained model can fully understand charts.
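The three-stage strategy described above can be sketched as prompt templates. Note that the function names and prompt wording below are purely illustrative; the paper's actual prompts and API calls are not reproduced here:

```python
# Hypothetical sketch of a ChartLlama-style data collection pipeline.
# Each function builds the prompt for one stage; the real system sends
# these to GPT-4 and parses the responses.

def data_prompt(topic: str, trend: str) -> str:
    """Stage 1: ask GPT-4 for tabular data with a given topic and trend."""
    return (
        f"Generate a small CSV table about '{topic}' whose values follow "
        f"a {trend} trend. Include a header row and 5-10 data rows."
    )

def code_prompt(csv_data: str, chart_type: str) -> str:
    """Stage 2: ask GPT-4 for plotting code that renders the data exactly."""
    return (
        f"Write Python matplotlib code that draws a {chart_type} chart "
        f"for the CSV data below, labeling axes and title precisely:\n{csv_data}"
    )

def qa_prompt(csv_data: str) -> str:
    """Stage 3: ask GPT-4 to describe the chart and write Q&A pairs."""
    return (
        "Based on the chart drawn from the CSV data below, write 5 diverse "
        f"question-answer pairs, including ones requiring arithmetic:\n{csv_data}"
    )
```

Separating the stages this way lets the same generated table drive both the rendered chart and its instruction data, which is what keeps the chart image and its question-answer annotations consistent.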
In chart understanding, traditional models can only handle simple questions, such as reading off numbers, and cannot answer more complex ones. They struggle to follow long instructions and often make errors on questions involving mathematical operations. In contrast, ChartLlama effectively avoids these problems. A specific comparison follows:
In addition to traditional tasks, the research team also defined several new tasks, including three tasks involving chart generation. The paper provides relevant examples:
Examples of chart reconstruction and chart editing, given a chart and instructions:
Examples of generating charts from instructions and raw data:
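As a rough illustration of the text-to-chart task, the toy function below turns raw data and an instruction into plotting code. This is a hypothetical stand-in for what the model produces, not ChartLlama's actual output format:

```python
def generate_chart_code(data: dict, instruction: str) -> str:
    """Toy stand-in for text-to-chart generation: emit matplotlib code
    for a simple bar chart from raw data and an instruction."""
    labels = list(data.keys())
    values = list(data.values())
    return "\n".join([
        "import matplotlib.pyplot as plt",
        f"labels = {labels!r}",
        f"values = {values!r}",
        "plt.bar(labels, values)",
        f"plt.title({instruction!r})",  # instruction reused as a title here
        "plt.show()",
    ])
```

The real model handles many chart types and free-form instructions; the point of the sketch is only that the task's output is executable plotting code rather than an image, which is also what makes chart editing and reconstruction possible.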
ChartLlama performs well on various benchmark datasets, reaching state-of-the-art levels while requiring less training data. Its flexible data generation and collection method greatly expands the chart types and task types covered by chart understanding and generation, advancing the field.
The ChartLlama team designed a flexible data collection method that leverages the powerful language and programming capabilities of GPT-4 to create a rich multi-modal chart dataset.
ChartLlama's data collection consists of three main phases:
1. Data generation: GPT-4 produces tabular data with specified topics, distributions, and trends.
2. Chart rendering: GPT-4 writes plotting code that uses open-source charting libraries to render each table as a chart.
3. Instruction data generation: GPT-4 describes each chart and generates diverse question-answer pairs as training samples.
Using the above steps, ChartLlama has built a dataset containing multiple tasks and multiple chart types. The proportions of different types of tasks and charts in the total data set are as follows:
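A multi-task dataset like this is typically stored as one record per (chart, instruction) pair. The field names below are illustrative assumptions, not the dataset's published schema:

```python
import json

# Hypothetical structure of one training sample in a multi-task
# chart dataset; field names are illustrative only.
sample = {
    "task": "chart_qa",          # e.g. chart_qa, summarization, extraction, editing
    "chart_type": "bar",         # the dataset spans many chart types
    "image": "charts/0001.png",  # rendered by the generated plotting code
    "instruction": "Which category has the highest value?",
    "answer": "Category B",
}

# One JSON object per line is a common on-disk format for such data.
line = json.dumps(sample)
```

Tagging each sample with its task and chart type is what allows the proportions of tasks and chart types in the overall dataset to be controlled and reported.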
Please refer to the original paper for more detailed examples and instructions.
ChartLlama delivers the best performance on both traditional and new tasks. Traditional tasks include chart question answering, chart summarization, and structured data extraction from charts. Comparing ChartLlama with previous state-of-the-art models yields the following results:
The researchers also evaluated ChartLlama's capabilities on its new tasks, including chart-to-code generation, chart summarization, and chart editing. They created a test set for each task and compared ChartLlama against LLaVA-1.5, currently the strongest open-source vision-language model. Here are the results:
The research team also tested ChartLlama's question-answering accuracy across a variety of chart types, comparing it with the previous SOTA model UniChart as a baseline. The results are as follows:
Overall, ChartLlama not only pushes the boundaries of multi-modal learning but also provides a more accurate and efficient tool for chart understanding and generation. Whether in academic writing or corporate presentations, ChartLlama makes understanding and creating charts more intuitive and efficient, an important step forward in generating and interpreting complex visual data.
Interested readers can refer to the original paper for more details.