
The hidden robot in the iPhone: based on the GPT-2 architecture, with an emoji tokenizer, uncovered by an MIT alumnus

PHPz | 2023-09-20

An enthusiast has revealed the "secret" of Apple's Transformer.

Swept up in the wave of large models, even the conservative Apple now makes sure to mention "Transformer" at every launch event.

At this year's WWDC, for example, Apple announced that the new versions of iOS and macOS would ship with a built-in Transformer language model to power an input method with text prediction.


Although Apple officially revealed little more, technology enthusiasts couldn't wait.

A developer named Jack Cook dug through the macOS Sonoma beta and unexpectedly uncovered plenty of new information:

  • In terms of model architecture, Cook believes Apple's language model appears to be built on GPT-2.
  • In terms of the tokenizer, emoji feature prominently in it.
For more details, let’s take a look.

Based on GPT-2 architecture

First, let's review what Apple's Transformer-based language model can do on the iPhone, MacBook, and other devices.

Its capabilities show up mainly in the input method: with the language model behind it, Apple's own keyboard can offer word prediction and error correction.


Jack Cook tested it in detail and found that the feature mainly predicts a single upcoming word.

△ Source: Jack Cook's blog post
The model sometimes predicts several upcoming words, but only when the semantics of the sentence are very obvious, behaving much like Gmail's auto-complete.

△ Source: Jack Cook's blog post
So where does this model live? After some in-depth digging, Cook determined:

I found the predictive text model in /System/Library/LinguisticData/RequiredAssets_en.bundle/AssetData/en.lm/unilm.bundle.

His reasoning:

  1. Many files in unilm.bundle do not exist in macOS Ventura (13.5) and only appear in the macOS Sonoma beta (14.0); a sketch for reproducing this kind of check follows the list.
  2. unilm.bundle contains an sp.dat file that exists in both Ventura and the Sonoma beta, but the Sonoma beta version has been updated with a set of tokens that clearly looks like a tokenizer vocabulary.
  3. The number of tokens in sp.dat matches two other files in unilm.bundle, unilm_joint_cpu.espresso.shape and unilm_joint_ane.espresso.shape, which describe the shape of each layer in the Espresso/CoreML model.
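
Point 1 is straightforward to reproduce. Below is a minimal sketch, not from Cook's post, that dumps the bundle's file list on one machine so it can be diffed against the same bundle on another macOS version; only the path comes from Cook's findings.

```python
from pathlib import Path

# Path where Cook reports finding the model (macOS Sonoma beta).
BUNDLE = Path(
    "/System/Library/LinguisticData/RequiredAssets_en.bundle"
    "/AssetData/en.lm/unilm.bundle"
)

def list_bundle(bundle: Path) -> set[str]:
    """Return the relative paths of all files inside the bundle."""
    return {str(p.relative_to(bundle)) for p in bundle.rglob("*") if p.is_file()}

if __name__ == "__main__":
    # Dump the sorted file list; run on Ventura and on the Sonoma beta,
    # then diff the two outputs to see which files are new.
    for name in sorted(list_bundle(BUNDLE)):
        print(name)
```
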
Speculating further from the network structure described in unilm_joint_cpu, Cook believes Apple's model is built on the GPT-2 architecture.

Its main components are a token embedding, positional encoding, decoder blocks, and an output layer; names like "gpt2_transformer_layer_3d" appear in each decoder block.

△ Source: Jack Cook's blog post

From the size of each layer, Cook further estimates that Apple's model has about 34 million parameters and a hidden size of 512; in other words, it is smaller than the smallest version of GPT-2.
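
To make the arithmetic concrete, here is a minimal PyTorch sketch of such a stack. The vocabulary size and hidden size come from Cook's findings; the layer count, head count, and context length are assumptions chosen so the total lands near his estimate; none of this is Apple's actual implementation.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 15_000  # tokens Cook found in sp.dat
D_MODEL = 512        # hidden size Cook inferred from the .shape files
N_LAYERS = 8         # assumption, not from the blog post
N_HEADS = 8          # assumption
CONTEXT = 128        # assumption

class TinyGPT(nn.Module):
    """GPT-2-style decoder-only stack: token embedding, positional
    encoding, decoder blocks, and a (weight-tied) output layer."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.pos_emb = nn.Embedding(CONTEXT, D_MODEL)
        block = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=N_HEADS, dim_feedforward=4 * D_MODEL,
            batch_first=True, norm_first=True)
        # An encoder stack plus a causal mask behaves as a GPT-style decoder.
        self.blocks = nn.TransformerEncoder(block, num_layers=N_LAYERS)
        self.lm_head = nn.Linear(D_MODEL, VOCAB_SIZE, bias=False)
        self.lm_head.weight = self.tok_emb.weight  # weight tying, as in GPT-2

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        seq_len = ids.size(1)
        pos = torch.arange(seq_len, device=ids.device)
        x = self.tok_emb(ids) + self.pos_emb(pos)
        causal = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=ids.device),
            diagonal=1)
        return self.lm_head(self.blocks(x, mask=causal))

model = TinyGPT()
print(sum(p.numel() for p in model.parameters()) / 1e6)  # ~33M parameters
```

With eight layers the count comes to roughly 33 million, so Cook's 34M figure is consistent with a stack of about this depth.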

He believes this is mainly because Apple wants a model that doesn't draw much power yet can run quickly and frequently.

That matches Apple's own statement at WWDC: the iPhone runs the model once for every key tapped.

However, it also means this text-prediction model is not well suited to continuing whole sentences or paragraphs.
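
The per-keystroke constraint also explains the single-word behavior seen earlier: one forward pass yields one next-token guess. A hedged sketch of such a loop, reusing the TinyGPT model from the previous snippet (greedy top-1 decoding is an assumption; Apple's actual decoding strategy is not documented):

```python
def predict_next(model: TinyGPT, ids: list[int]) -> int:
    """One forward pass per keystroke, keeping only the top-1 next token."""
    with torch.no_grad():
        logits = model(torch.tensor([ids]))  # shape: (1, seq_len, vocab)
        return int(logits[0, -1].argmax())   # single-word prediction
```

Continuing a whole sentence would mean looping this call many times per keystroke, which is exactly the cost a small, frequently invoked model is meant to avoid.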

△ Source: Jack Cook's blog post

Beyond the model architecture, Cook also dug up information about the tokenizer.

He found a vocabulary of 15,000 tokens in unilm.bundle/sp.dat; notably, it contains 100 emoji.
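
The file name sp.dat suggests a SentencePiece-style vocabulary, though the post does not confirm the format. If it were a standard SentencePiece model, counting the emoji tokens might look like this (the format, path, and emoji heuristic are all assumptions):

```python
import sentencepiece as spm

# Assumes sp.dat is a standard SentencePiece model, which is unconfirmed.
sp = spm.SentencePieceProcessor(model_file="unilm.bundle/sp.dat")

def is_emoji(piece: str) -> bool:
    """Rough heuristic: any codepoint in the main emoji blocks."""
    return any(0x1F300 <= ord(c) <= 0x1FAFF or 0x2600 <= ord(c) <= 0x27BF
               for c in piece)

emoji = [sp.id_to_piece(i) for i in range(sp.get_piece_size())
         if is_emoji(sp.id_to_piece(i))]
print(len(emoji))  # Cook counted 100 emoji among the 15,000 tokens
```
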

Cook Reveals Cook

Although this Cook is not that Cook, his blog post attracted plenty of attention as soon as it was published.


Prompted by his findings, netizens enthusiastically discussed how Apple balances user experience with the application of cutting-edge technology.


As for Jack Cook himself: he earned bachelor's and master's degrees in computer science at MIT and is currently pursuing a master's degree in social science of the Internet at the University of Oxford.

He previously interned at NVIDIA, where he focused on language models such as BERT, and has also worked as a senior R&D engineer for natural language processing at The New York Times.

