At the 2024 Worldwide Developers Conference, Apple launched Apple Intelligence, a new personalized intelligence system that provides practical intelligent services across iPhone, iPad, and Mac, and is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia.
Tim Cook has said that Apple Intelligence marks a new chapter in Apple's innovation and will change how users interact with its products. He emphasized that Apple's unique approach combines generative AI with users' personal information to deliver genuinely useful intelligent services. Apple Intelligence also accesses information in a completely private and secure way, helping users accomplish what matters most to them. This, he argued, is an AI experience unique to Apple.
Now, more than a month after the official announcement, Apple Intelligence has finally landed on devices, and the accompanying technical report has been released.
Over the past day, users with an iPhone 15 Pro or iPhone 15 Pro Max have been able to download the iOS 18.1 developer beta and try out Apple Intelligence features.
With the release of this 47-page technical report, we can gain a much deeper understanding of the technology behind Apple Intelligence.
- Report address: https://machinelearning.apple.com/papers/apple_intelligence_foundation_language_models.pdf
The report details two of the models: AFM-on-device (AFM stands for Apple Foundation Model), a language model with approximately 3 billion parameters, and AFM-server, a larger server-based language model that can perform specialized tasks efficiently, accurately, and responsibly (Figure 1).
These two base models exist as part of Apple’s larger family of generative models.
Architecture and training

The AFM base models are dense decoder-only models built on the Transformer architecture, with the following design choices:
- A shared input/output embedding matrix to reduce the memory used for parameters.
- Pre-normalization with RMSNorm to improve training stability.
- Query/key normalization to improve training stability.
- Grouped-query attention (GQA) with 8 key-value heads to reduce the KV-cache memory footprint.
- SwiGLU activation for greater efficiency.
- RoPE positional embeddings with the base frequency set to 500k to support long contexts.
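To see why GQA with 8 key-value heads matters for on-device inference, here is a back-of-envelope sizing of the KV cache. The layer count, query-head count, and head dimension below are hypothetical values for a model of roughly this scale, not AFM's published configuration:

```python
# Illustrative KV-cache sizing: grouped-query attention (GQA) with 8
# key-value heads vs. full multi-head attention (MHA). The layer count,
# head count, and head dimension are hypothetical, chosen only to show
# the scale of the saving; they are not AFM's actual configuration.

def kv_cache_bytes_per_token(n_layers, n_kv_heads, head_dim, bytes_per_value=2):
    """Bytes of K and V cached per generated token (fp16 by default)."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value

n_layers, n_query_heads, head_dim = 26, 24, 128  # hypothetical ~3B config

full_mha = kv_cache_bytes_per_token(n_layers, n_query_heads, head_dim)
gqa_8 = kv_cache_bytes_per_token(n_layers, 8, head_dim)

print(f"MHA  : {full_mha / 1024:.0f} KiB per token")
print(f"GQA-8: {gqa_8 / 1024:.0f} KiB per token")
print(f"reduction: {n_query_heads // 8}x")
```

Under these assumptions, sharing each key-value head across three query heads cuts the cache to a third of its MHA size, which is significant when generation over long contexts must fit in a phone's memory budget.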
The AFM pre-training process plays a key role in developing high-performance language models that power the range of Apple Intelligence features. The research team focused on efficiency and data quality to achieve a high-quality end-to-end user experience.
On the post-training side, the research team found that improving general-purpose post-training lifts the performance of all Apple Intelligence features, because the model gains stronger instruction-following, reasoning, and writing abilities.
To ensure the models' capabilities remain consistent with Apple's commitment to user privacy and its Responsible AI principles, post-training combines a series of innovations in data collection and generation, instruction tuning, and alignment. The post-training process consists of two stages: supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). The research team proposed two new post-training algorithms: (1) a rejection sampling fine-tuning algorithm with an iterative teaching committee (iTeC), and (2) an RLHF algorithm for reinforcement learning iterations that uses mirror descent policy optimization and a leave-one-out advantage estimator (MDLOO). Both significantly improve model quality.
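The report does not give pseudocode for MDLOO, but the leave-one-out baseline it names is a standard idea: when several responses are sampled for the same prompt, each response's advantage is its reward minus the mean reward of the other samples. A minimal sketch of that estimator, as an illustration of the general technique rather than Apple's exact implementation:

```python
# Minimal sketch of a leave-one-out (LOO) advantage estimator, the idea
# behind the "LOO" in MDLOO: for k sampled responses to one prompt, each
# response's baseline is the mean reward of the *other* k-1 samples.
# Illustrative only; not Apple's actual code.

def leave_one_out_advantages(rewards):
    k = len(rewards)
    assert k >= 2, "need at least two samples per prompt"
    total = sum(rewards)
    # the baseline for sample i excludes sample i's own reward
    return [r - (total - r) / (k - 1) for r in rewards]

advs = leave_one_out_advantages([1.0, 0.0, 0.5, 0.5])
print(advs)  # the advantages sum to zero up to floating point
```

The zero-sum property is the point of the baseline: it centers the policy-gradient signal per prompt without needing a separately trained value network.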
Apple Intelligence features

The base models are purpose-built for Apple Intelligence, the personal intelligence system that supports iPhone, iPad, and Mac. Apple found that fine-tuning small models for specific tasks could raise their performance to state-of-the-art levels. In addition, they developed an architecture based on runtime-swappable adapters, enabling a single base model to be specialized for dozens of such tasks. Figure 2 shows a high-level overview.
Apple uses LoRA adapters to fine-tune the models for specific tasks. For each task, the adapters adjust all linear projection matrices in the AFM self-attention layers and the fully connected layers in the position-wise feed-forward networks. Because only the adapters are fine-tuned, the original parameters of the pre-trained base model remain unchanged, preserving the model's general knowledge while tailoring each adapter to its task.

To fit AFM onto edge devices with limited memory budgets and to reduce inference costs, quantization is essential. Previous research has found that 4-bit quantized models suffer only small losses relative to the raw 32/16-bit floating-point versions. To achieve the best balance between model capacity and inference performance, Apple developed state-of-the-art quantization methods and a framework built on accuracy-recovery adapters. These allow the model to achieve nearly lossless quantization at an average of less than 4 bits per weight, with flexible choices of quantization scheme.

After post-training, the models are compressed and quantized to an average of less than 4 bits per weight. Quantized models typically show a moderate quality loss, so Apple does not use the quantized model directly for feature development; instead, it attaches a set of parameter-efficient LoRA adapters for quality recovery. Notably, training the accuracy-recovery adapters is sample-efficient and can be thought of as a mini version of training the base model: in the adapters' pre-training phase, only about 10 billion tokens (about 0.15% of base-model training) are needed to fully restore the quantized model's capabilities. Since application adapters are fine-tuned from these accuracy-recovery adapters, they incur no additional memory usage or inference cost.
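The reason LoRA adapters are so cheap to swap and train is that the trainable update to each frozen weight matrix is a low-rank product. A small numpy sketch with hypothetical dimensions (not AFM's actual layer sizes) shows the mechanics and the parameter saving:

```python
import numpy as np

# Sketch of a LoRA adapter on one linear projection: the frozen weight W
# stays untouched, and the trainable update is the low-rank product B @ A.
# Dimensions here are illustrative, not AFM's actual configuration.

d_out, d_in, rank = 1024, 1024, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # zero-init: no change at start

def lora_forward(x):
    # y = (W + B @ A) x, computed without materializing the dense delta
    return W @ x + B @ (A @ x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4%}")  # 3.1250%
```

Zero-initializing B means the adapted model starts out exactly equal to the base model, and at rank 16 the trainable parameters are a small fraction of the frozen matrix, which is what makes per-feature adapters cheap to store and swap at runtime.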
Regarding adapter size, Apple found that a rank of 16 offers the best trade-off between model capacity and inference performance. For flexibility, however, Apple provides a set of accuracy-recovery adapters at several ranks {8, 16, 32} for application teams to choose from.

Mixed-precision quantization

Residual connections exist in every transformer block and every layer of AFM, so it is unlikely that all layers are equally important. Following this intuition, Apple further reduced memory usage by pushing certain layers to 2-bit quantization (the default is 4-bit). On average, AFM-on-device compresses to only about 3.5 bits per weight (bpw) without significant quality loss.

The research team used common open source evaluation tools and benchmarks to evaluate the AFM pre-trained models. Table 2 shows the results of AFM-on-device and AFM-server on HELM MMLU v1.5.0.
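The "about 3.5 bits per weight" figure follows directly from a weighted mean of the per-layer bit widths. A back-of-envelope check, with a hypothetical split between 2-bit and 4-bit weights chosen only to land near 3.5 bpw (the report does not disclose the actual split):

```python
# Back-of-envelope check of the ~3.5 bits-per-weight (bpw) figure: if
# some blocks are pushed to 2-bit while the rest stay at the default
# 4-bit, the average bpw is a weighted mean over weight counts.
# The split below is hypothetical, not Apple's published breakdown.

def average_bpw(layer_groups):
    """layer_groups: list of (num_weights, bits) pairs."""
    total_bits = sum(n * b for n, b in layer_groups)
    total_weights = sum(n for n, _ in layer_groups)
    return total_bits / total_weights

# e.g. a quarter of the weights at 2-bit, the rest at 4-bit
mix = [(250_000_000, 2), (750_000_000, 4)]
print(f"{average_bpw(mix):.2f} bpw")  # 3.50 bpw
```

Under this assumed split, quantizing a quarter of the weights at 2-bit brings the average from 4 down to 3.5 bpw, consistent with the compression the report describes.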
These benchmarks show that the AFM pre-trained model has strong language and inference capabilities, providing a solid foundation for post-training and feature fine-tuning.
The comparison of AFM with open source models (Phi-3, Gemma-1.1, Llama-3, Mistral, DBRX-Instruct) and commercial models (GPT-3.5 and GPT-4) is shown in Figure 3. Human evaluators preferred the AFM models over the others. In particular, AFM-on-device achieved a 47.7% win rate against Phi-3-mini despite being 25% smaller, and even beat the strong open source baselines Gemma-7B and Mistral-7B.
To measure the model’s ability to generate responses that follow instructions in prompts, the research team evaluated AFM-on-device and AFM-server on the IFEval benchmark, with the results shown in Figure 4 below:
As shown in Figure 5, AFM-server achieves the best overall accuracy, better than Gemini-1.5-Pro-Preview-0514 and GPT-4.
Apple compared AFM against some of the best models as well as smaller open source models. As shown in Figure 6, AFM-on-device achieves performance comparable to or better than Gemma-7B and Mistral-7B. AFM-server significantly outperforms DBRX-Instruct and GPT-3.5, and is comparable to GPT-4.
Figure 7 compares the performance of post-trained AFM on mathematical benchmarks. It was found that AFM-on-device performed significantly better than Mistral-7B and Gemma-7B, even though it was less than half their size.
The figure below shows human raters' quality assessments of the AFM-on-device adapter, Phi-3-mini, Llama-3-8B, and Gemma-7B on the summarization task. Figure 8 shows that the AFM-on-device adapter generally outperforms the other models.
Apple Intelligence is developed and designed with user privacy in mind. Figure 9 summarizes the violation rates assigned by human raters across models (lower is better). Both AFM-on-device and AFM-server are robust to adversarial prompts, with violation rates significantly lower than those of open source and commercial models.
Figure 10 shows that human raters preferred the AFM models over the other models.