
The black box has been opened! An interactive Transformer visualization tool that runs GPT-2 locally and performs real-time inference

王林 · Original · 2024-08-11 16:03:00

It's 2024. Is there anyone who still doesn't understand how the Transformer works? Come and try this interactive tool.


In 2017, Google proposed the Transformer in the paper "Attention Is All You Need", a major breakthrough in deep learning. The paper has since been cited nearly 130,000 times, and all subsequent models in the GPT family are built on the Transformer architecture, which shows its broad influence.

As a neural network architecture, Transformer is widely popular in a variety of tasks from text to vision, especially in the currently hot field of AI chatbots.


However, for many non-professionals, the inner workings of the Transformer remain opaque, hindering their understanding and participation. It is therefore particularly necessary to demystify this architecture. But many blogs, video tutorials, and 3D visualizations tend to emphasize mathematical complexity and implementation details, which can be confusing for beginners. Visualizations designed for AI practitioners, meanwhile, focus on neuron- and layer-level interpretability and remain challenging for non-experts.

Therefore, several researchers from Georgia Tech and IBM Research developed a web-based, open-source interactive visualization tool, "Transformer Explainer", to help non-professionals understand both the high-level model structure and the low-level mathematical operations of the Transformer, as shown in Figure 1 below.

[Figure 1: Overview of Transformer Explainer]

Transformer Explainer explains the inner workings of the Transformer through text generation, using a Sankey-diagram visualization design inspired by recent work that treats the Transformer as a dynamical system, emphasizing how input data flows through the model's components. The Sankey diagram effectively illustrates how information is passed through the model and how the input is processed and transformed by Transformer operations.

In terms of content, Transformer Explainer tightly integrates a model overview that summarizes the Transformer structure and lets users transition smoothly between multiple levels of abstraction, visualizing the interplay between low-level mathematical operations and the high-level model structure so that they can fully grasp the complex concepts in the Transformer.

Functionally, Transformer Explainer not only provides a web-based implementation but also supports real-time inference. Unlike many existing tools that require custom software installation or lack inference capabilities, it integrates a live GPT-2 model that runs natively in the browser using a modern front-end framework. Users can interactively experiment with their own input text and observe in real time how the Transformer's internal components and parameters work together to predict the next token.

Transformer Explainer broadens access to modern generative AI techniques without requiring advanced computing resources, installation, or programming skills. GPT-2 was chosen because the model is well known, has fast inference, and is architecturally similar to more advanced models such as GPT-3 and GPT-4.


  • Paper address: https://arxiv.org/pdf/2408.04619
  • GitHub address: http://poloclub.github.io/transformer-explainer/
  • Online demo address: https://t.co/jyBlJTMa7m
Since the tool supports custom input, we also tried "what a beautiful day"; the results are shown in the figure below.

[Figure: generation results for the input "what a beautiful day"]

Transformer Explainer has received high praise from many netizens. Some call it a very cool interactive tool.


Others say they had been waiting for an intuitive tool to explain self-attention and positional encoding, and Transformer Explainer is exactly that; they expect it to be a game-changer.


Someone also made a Chinese translation.


Display address: http://llm-viz-cn.iiiai.com/llm

This brings to mind Karpathy, another great popularizer, who has previously produced a number of tutorials on GPT-2, including "Hand-crafting GPT-2 in pure C: the former OpenAI and Tesla executive's new project goes viral" and "Karpathy's latest four-hour video tutorial: reproduce GPT-2 from scratch, let it run overnight and it's done". Now that there is a visualization tool for the Transformer's inner workings, learning should be even more effective when the two are used together.

Transformer Explainer system design and implementation

Transformer Explainer visually shows how a trained, Transformer-based GPT-2 model processes text input and predicts the next token. The front end uses Svelte and D3 for interactive visualization, and the back end uses ONNX Runtime and Hugging Face's Transformers library to run the GPT-2 model directly in the browser.
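For readers curious about the in-browser inference piece, here is a minimal sketch of how a GPT-2 ONNX model could be loaded and run with ONNX Runtime Web. The model path, tensor names ("input_ids", "logits"), and shapes are assumptions for illustration and are not taken from the Transformer Explainer codebase.

```typescript
import * as ort from 'onnxruntime-web';

// Hypothetical model path; the actual asset name and location are assumptions.
const MODEL_URL = '/models/gpt2.onnx';

// Create the session once and reuse it; by default ONNX Runtime Web executes
// the graph with its WebAssembly backend.
const sessionPromise = ort.InferenceSession.create(MODEL_URL);

// Run one forward pass and return the logits for the final position,
// i.e. the unnormalized scores over the vocabulary for the next token.
async function nextTokenLogits(tokenIds: number[]): Promise<Float32Array> {
  const session = await sessionPromise;

  // GPT-2 ONNX exports typically take int64 token ids shaped [batch, sequence].
  const inputIds = new ort.Tensor(
    'int64',
    BigInt64Array.from(tokenIds.map(BigInt)),
    [1, tokenIds.length],
  );

  // The input/output names ('input_ids', 'logits') depend on how the model
  // was exported; some exports also expect an attention mask.
  const outputs = await session.run({ input_ids: inputIds });
  const logits = outputs['logits']; // shape [1, sequence, vocab]

  const vocabSize = logits.dims[2];
  const data = logits.data as Float32Array;
  return data.slice(data.length - vocabSize);
}
```

In the actual tool, tokenization would come from a GPT-2 tokenizer (for example via Hugging Face's browser-side Transformers library), and the returned logits would feed the temperature-scaled probability view discussed below.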

In designing Transformer Explainer, a major challenge was managing the complexity of the underlying architecture, because showing every detail at once would be distracting. To address this, the researchers paid close attention to two key design principles.

First, the researchers reduce complexity through multi-level abstraction. The tool is structured to present information at different levels of abstraction, which avoids information overload by letting users start from a high-level overview and drill down into details as needed. At the highest level, the tool shows the complete processing flow: receiving user-provided text as input (Figure 1A), embedding it, processing it through multiple Transformer blocks, and using the processed data to rank the most likely next-token predictions.

Intermediate operations, such as the computation of the attention matrix (Figure 1C), are collapsed by default so that the visualization highlights the importance of the results; users can expand them and watch the derivation through an animation sequence. The researchers also adopted a consistent visual language, such as stacking attention heads and collapsing repeated Transformer blocks, to help users recognize repeating patterns in the architecture while keeping the end-to-end flow of data intact.
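As a reference for what that collapsed attention computation expands into, here is a minimal sketch of single-head scaled dot-product attention on plain arrays. It follows the standard formula softmax(QK^T / sqrt(d_k))V rather than the Transformer Explainer source, so the names and shapes are illustrative.

```typescript
// Single-head scaled dot-product attention over plain number[][] matrices.
// q, k, v have shape [seqLen][dHead]; the result has the same shape.
function attention(q: number[][], k: number[][], v: number[][]): number[][] {
  const dHead = q[0].length;

  // Attention matrix: scores[i][j] = (q_i · k_j) / sqrt(dHead), softmaxed per row.
  const scores = q.map(qi =>
    softmax(k.map(kj => dot(qi, kj) / Math.sqrt(dHead))),
  );

  // Each output row is a weighted average of the value vectors.
  return scores.map(row =>
    v[0].map((_, col) => row.reduce((sum, w, j) => sum + w * v[j][col], 0)),
  );
}

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

function softmax(xs: number[]): number[] {
  const max = Math.max(...xs);               // subtract max for numerical stability
  const exps = xs.map(x => Math.exp(x - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / total);
}
```

GPT-2 additionally applies a causal mask so that each position only attends to earlier positions, and it stacks many such heads per layer; both details are omitted here for brevity.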

Second, the researchers enhance understanding and engagement through interactivity. The temperature parameter is crucial in shaping the Transformer's output probability distribution, making the next-token prediction more deterministic (at low temperatures) or more random (at high temperatures), yet existing educational resources on Transformers tend to overlook it. With this new tool, users can adjust the temperature parameter in real time (Figure 1B) and visualize its critical role in controlling prediction certainty (Figure 2).
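To make the temperature effect concrete, here is a small sketch of temperature-scaled sampling over next-token logits. It shows the standard softmax-with-temperature recipe, not the tool's actual implementation; the logits are assumed to come from a forward pass like the one sketched above.

```typescript
// Convert raw next-token logits into a probability distribution, dividing by
// the temperature first: T < 1 sharpens the distribution (more deterministic),
// T > 1 flattens it (more random).
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map(l => l / temperature);
  const max = Math.max(...scaled);            // numerical stability
  const exps = scaled.map(x => Math.exp(x - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / total);
}

// Sample a token id from the distribution.
function sampleToken(probs: number[]): number {
  let r = Math.random();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1; // fallback for floating-point rounding
}
```

As the temperature approaches zero the distribution collapses onto the highest-scoring token, while large values push it toward uniform, which is exactly the behavior the Figure 2 visualization illustrates.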

[Figure 2: How the temperature parameter reshapes the next-token probability distribution]

In addition, users can choose from the provided examples or enter their own text (Figure 1A). Support for custom input lets users analyze the model's behavior under different conditions and interactively test their own hypotheses against various inputs, which strengthens their sense of engagement.

So what do real usage scenarios look like?

Professor Rousseau is modernizing the content of her natural language processing course to highlight recent advances in generative AI. She has noticed that some of her students regard Transformer-based models as inscrutable "magic", while others want to understand how the models work but are unsure where to start.

To address this, she guided her students to use Transformer Explainer, which gives them an interactive overview of the Transformer (Figure 1) and encourages them to experiment and learn actively. Her class has more than 300 students, and Transformer Explainer's ability to run entirely in the students' browsers, without installing any software or special hardware, is a significant advantage: it spares students the worry of managing software or hardware setups.

The tool introduces students to complex mathematical operations, such as the attention computation, through animations and interactive, reversible abstractions (Figure 1C). This approach helps students build both a high-level understanding of the operations and a deep understanding of the underlying details that produce the results.

Professor Rousseau is also aware that the Transformer's technical capabilities and limitations are sometimes anthropomorphized, for example by treating the temperature parameter as a "creativity" control. By encouraging students to experiment with the temperature slider (Figure 1B), she shows them how temperature actually modifies the probability distribution over the next token (Figure 2), thereby controlling the randomness of the predictions and striking a balance between deterministic and more creative output.

Moreover, once the system visualizes the token-processing pipeline, students can see that there is no "magic" here at all: whatever the input text (Figure 1A), the model follows a well-defined sequence of operations, using the Transformer architecture to sample just one token at a time and then repeating the process.
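That "one token at a time, then repeat" loop is easy to write down. The sketch below strings together the helpers assumed above (nextTokenLogits, softmaxWithTemperature, sampleToken) plus a hypothetical GPT-2 tokenizer interface; it illustrates the general autoregressive procedure, not Transformer Explainer's own code.

```typescript
// Hypothetical tokenizer interface; in practice this would be a GPT-2 BPE tokenizer.
interface Tokenizer {
  encode(text: string): number[];
  decode(ids: number[]): string;
}

// Generate `maxNewTokens` tokens: run a forward pass, turn the last-position
// logits into probabilities, sample one token, append it, and repeat.
async function generate(
  tokenizer: Tokenizer,
  prompt: string,
  maxNewTokens: number,
  temperature: number,
): Promise<string> {
  const ids = tokenizer.encode(prompt);
  for (let step = 0; step < maxNewTokens; step++) {
    const logits = await nextTokenLogits(ids);
    const probs = softmaxWithTemperature(Array.from(logits), temperature);
    ids.push(sampleToken(probs));
  }
  return tokenizer.decode(ids);
}
```

Under these assumptions, a call like generate(tokenizer, "what a beautiful day", 20, 0.8) would produce the kind of continuation shown in the screenshot earlier in the article, one sampled token at a time.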

Future Work

The researchers are enhancing the tool's interactive explanations to improve the learning experience. They are also working on speeding up inference with WebGPU and reducing model size with compression techniques. In addition, they plan to run user studies to evaluate Transformer Explainer's effectiveness and usability, observing how AI novices, students, educators, and practitioners use the tool and collecting feedback on additional features they would like to see supported.
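On the WebGPU point, recent versions of ONNX Runtime Web expose a WebGPU execution provider that can be requested at session creation. Whether Transformer Explainer will use exactly this path is not stated here, so treat the snippet as a hedged illustration.

```typescript
// Depending on the onnxruntime-web version, the WebGPU build may need to be
// imported from 'onnxruntime-web/webgpu' instead of the default entry point.
import * as ort from 'onnxruntime-web';

// Request the WebGPU execution provider, falling back to WebAssembly when the
// browser (or build) does not support it.
const session = await ort.InferenceSession.create('/models/gpt2.onnx', {
  executionProviders: ['webgpu', 'wasm'],
});
```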

What are you waiting for? Try the app, break the illusion that the Transformer is "magic", and truly understand the principles behind it.
