
At the 2024 Worldwide Developers Conference, Apple launched Apple Intelligence, a new personalized intelligence system that provides practical intelligent services across iPhone, iPad and Mac, and is deeply integrated into iOS 18, iPadOS 18 and macOS Sequoia.

Tim Cook has said that Apple Intelligence is a new chapter in Apple's innovation and will change the way users use its products. He emphasized that Apple's unique approach combines generative artificial intelligence with users' personal information to deliver truly useful intelligent services. In addition, Apple Intelligence provides completely private and secure access to information, helping users accomplish what matters most to them. This is an AI experience unique to Apple.

Now, more than a month after the official announcement of Apple Intelligence, the technology has finally landed on devices, and the relevant technical report has been released.

In the past day, users who own an iPhone 15 Pro or iPhone 15 Pro Max have been able to download the iOS 18.1 developer beta and experience the features of Apple Intelligence.

With the release of this 47-page technical report, we can have a deeper understanding of the secret weapon behind Apple Intelligence.

  • Report address: https://machinelearning.apple.com/papers/apple_intelligence_foundation_language_models.pdf

The report details two of these models: AFM-on-device (AFM stands for Apple Foundation Model), a language model with approximately 3 billion parameters, and AFM-server, a larger server-based language model that can perform specialized tasks efficiently, accurately, and responsibly (Figure 1).

These two base models exist as part of Apple’s larger family of generative models.

Architecture and training

The AFM base models are dense decoder-only models built on the Transformer architecture, with the following design (a rough code sketch follows the list):

  • A shared input/output embedding matrix to reduce the memory used for parameters.
  • RMSNorm pre-normalization to improve training stability.
  • Query/key normalization to improve training stability.
  • Grouped-query attention (GQA) with 8 key-value heads to reduce the KV-cache memory footprint.
  • SwiGLU activation for higher efficiency.
  • RoPE positional embeddings with the base frequency set to 500k to support long context.
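
For concreteness, here is a minimal PyTorch-style sketch of a pre-norm decoder block with these design choices. It is not Apple's code: the dimensions, hidden sizes and module names are illustrative assumptions, the rotary-embedding application is only indicated by a comment, and the shared input/output embedding lives outside a single block and is not shown. Only the design points listed above come from the report.

```python
# Illustrative PyTorch sketch of an AFM-style pre-norm decoder block.
# Dimensions, names and the omission of RoPE application are assumptions;
# only the design choices listed above come from the report.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # Root-mean-square normalization, no mean subtraction.
        return x * x.pow(2).mean(-1, keepdim=True).add(self.eps).rsqrt() * self.weight


class SwiGLU(nn.Module):
    """Feed-forward network with SwiGLU activation: down(SiLU(gate(x)) * up(x))."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))


class DecoderBlock(nn.Module):
    """Pre-norm block: RMSNorm -> GQA attention -> RMSNorm -> SwiGLU FFN."""
    def __init__(self, dim=2048, n_heads=16, n_kv_heads=8, ffn_hidden=5632,
                 rope_base=500_000.0):
        super().__init__()
        self.head_dim = dim // n_heads
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.rope_base = rope_base  # large base frequency to support long context
        self.attn_norm, self.ffn_norm = RMSNorm(dim), RMSNorm(dim)
        self.q_proj = nn.Linear(dim, n_heads * self.head_dim, bias=False)
        # Only 8 key/value heads -> a much smaller KV cache at inference time.
        self.k_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)
        # Query/key normalization for training stability.
        self.q_norm, self.k_norm = RMSNorm(self.head_dim), RMSNorm(self.head_dim)
        self.ffn = SwiGLU(dim, ffn_hidden)

    def forward(self, x):
        b, t, d = x.shape
        h = self.attn_norm(x)
        q = self.q_norm(self.q_proj(h).view(b, t, self.n_heads, self.head_dim))
        k = self.k_norm(self.k_proj(h).view(b, t, self.n_kv_heads, self.head_dim))
        v = self.v_proj(h).view(b, t, self.n_kv_heads, self.head_dim)
        # (RoPE rotation with base `rope_base` would be applied to q and k here.)
        q, k, v = (z.transpose(1, 2) for z in (q, k, v))
        rep = self.n_heads // self.n_kv_heads
        k, v = k.repeat_interleave(rep, dim=1), v.repeat_interleave(rep, dim=1)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.o_proj(attn.transpose(1, 2).reshape(b, t, d))
        return x + self.ffn(self.ffn_norm(x))
```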

The AFM pre-training process plays a key role in developing high-performance language models to support a range of Apple Intelligence features. The research team focuses on efficiency and data quality to achieve a high-quality end-to-end user experience.

In terms of post-training, the research team found that improving general post-training lifts the performance of all Apple Intelligence features, because the model gains a stronger ability to follow instructions, reason, and write.

To ensure these model capabilities are consistent with Apple's commitment to protecting user privacy and with Apple's Responsible AI principles, the post-training work includes a series of innovations in data collection and generation, instruction tuning, and alignment. The post-training process consists of two stages: supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). The research team proposed two new post-training algorithms: (1) a rejection-sampling fine-tuning algorithm with a teacher committee (iTeC), and (2) an RLHF algorithm for reinforcement learning iterations that combines mirror-descent policy optimization with a leave-one-out advantage estimator (MDLOO), which significantly improves model quality.
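
The report gives limited implementation detail for MDLOO in this summary, but the leave-one-out advantage estimator at its core is easy to illustrate: several responses are sampled for each prompt, and each response is baselined against the mean reward of the other responses to the same prompt. The sketch below is a reconstruction of that idea, not Apple's code; the function name and example rewards are invented.

```python
# Illustrative sketch of a leave-one-out advantage estimator (the "LOO" part of
# MDLOO): each sampled response is baselined against the mean reward of the
# *other* responses to the same prompt. Names and numbers are assumptions.
from typing import List


def leave_one_out_advantages(rewards: List[float]) -> List[float]:
    """Given rewards of k responses sampled for one prompt, return per-response
    advantages r_i - mean(r_j for j != i)."""
    k = len(rewards)
    if k < 2:
        raise ValueError("need at least two samples per prompt for a leave-one-out baseline")
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]


if __name__ == "__main__":
    # Four responses to the same prompt, scored by a reward model (toy numbers).
    print(leave_one_out_advantages([0.2, 0.9, 0.5, 0.4]))
    # -> [-0.4, 0.533..., 0.0, -0.133...]
```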

Apple Intelligence Features

The base model is specially designed for Apple Intelligence, a personal intelligence system that supports iPhone, iPad and Mac.

Apple found that they could lift the performance of small models to state-of-the-art levels by fine-tuning them for specific tasks. In addition, they developed an architecture based on runtime-swappable adapters, enabling a single base model to be specialized for dozens of such tasks (a conceptual sketch of this pattern follows). Figure 2 shows a high-level overview.
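
The report describes keeping one shared set of base weights and swapping small task-specific adapters in and out at runtime. The sketch below only illustrates that dispatch pattern; the registry class, task names, and load_adapter/generate methods are hypothetical and are not Apple's API.

```python
# Conceptual sketch of runtime-swappable task adapters on one shared base model.
# The registry, task names, and load_adapter/generate calls are hypothetical.
class AdapterRegistry:
    def __init__(self, base_model):
        self.base_model = base_model
        self._adapters = {}   # task name -> small adapter weights (e.g. LoRA deltas)
        self._active = None

    def register(self, task: str, adapter_weights):
        self._adapters[task] = adapter_weights

    def run(self, task: str, prompt: str) -> str:
        # Swap in the small task-specific adapter; the base weights never change.
        if task != self._active:
            self.base_model.load_adapter(self._adapters[task])
            self._active = task
        return self.base_model.generate(prompt)


# Hypothetical usage: one base model, many specializations.
# registry = AdapterRegistry(base_model)
# registry.register("summarization", summarization_lora)
# registry.register("proofreading", proofreading_lora)
# reply = registry.run("summarization", email_text)
```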

Adapter Architecture

Apple uses LoRA adapters to fine-tune the models for specific tasks. For each task, all linear projection matrices in the AFM self-attention layers and the fully connected layers in the pointwise feed-forward networks are adapted. Because only the adapters are fine-tuned, the original parameters of the pre-trained base model remain unchanged, preserving the model's general knowledge while tailoring the adapters to support specific tasks.
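
Mechanically, a LoRA adapter adds a trainable low-rank update on top of a frozen projection: the effective weight becomes W + (alpha/r)·B·A, and only A and B are trained. The wrapper below is a generic illustration, not Apple's implementation; the rank and alpha values are placeholders (the report's default adapter rank of 16 is discussed in the quantization section).

```python
# Minimal LoRA wrapper: the frozen base projection is augmented with a trainable
# low-rank update (alpha / rank) * B @ A. Only A and B receive gradients.
# Rank and alpha values here are placeholders, not figures from the report.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the low-rank correction; at init the correction is zero.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


# Hypothetical use: wrap the query projection of an attention layer.
# attn.q_proj = LoRALinear(attn.q_proj, rank=16)
```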

Quantization

To fit AFM onto edge devices with limited memory budgets and to reduce inference costs, quantization techniques must be considered. Previous research has found that 4-bit quantized models suffer only small quality losses compared to uncompressed 32/16-bit floating-point models.

To achieve the best balance between model capacity and inference performance, Apple developed state-of-the-art quantization methods and a framework that leverages accuracy-recovery adapters. This allows the model to achieve nearly lossless quantization at an average of less than 4 bits per weight, and provides flexible quantization scheme selection.

Method

After post-training, the model is compressed and quantized to an average of less than 4 bits per weight. Quantized models typically exhibit a moderate loss in quality. Therefore, Apple does not use the quantized model directly for feature development, but instead attaches a set of parameter-efficient LoRA adapters for quality recovery.
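
The report does not publish the exact quantization recipe, so the round trip below is only a generic illustration of group-wise symmetric 4-bit quantization: weights are scaled per group, rounded to the int4 range, and dequantized. The small residual error that remains is the quality gap the accuracy-recovery adapters are trained to close; the group size and scheme here are assumptions.

```python
# Illustrative group-wise symmetric 4-bit quantize/dequantize round trip.
# The group size and scheme are assumptions; the report only states that weights
# average under 4 bits and that LoRA adapters recover the lost quality.
import torch


def quantize_4bit(weight: torch.Tensor, group_size: int = 32):
    """Quantize a 2-D weight matrix to signed 4-bit integers, one scale per group."""
    out_features, in_features = weight.shape
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # Symmetric per-group scale; int4 values lie in [-8, 7].
    scale = (w.abs().amax(dim=-1, keepdim=True) / 7.0).clamp_min(1e-8)
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale


def dequantize_4bit(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (q.to(scale.dtype) * scale).reshape(q.shape[0], -1)


if __name__ == "__main__":
    w = torch.randn(64, 128)
    q, s = quantize_4bit(w)
    err = (dequantize_4bit(q, s) - w).abs().mean()
    # A small reconstruction error remains; in the AFM recipe this residual gap
    # is what the accuracy-recovery LoRA adapters are trained to compensate for.
    print(f"mean absolute quantization error: {err.item():.4f}")
```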

It is worth noting that training the accuracy-recovery adapters is sample-efficient and can be thought of as a miniature version of training the base model. In the adapters' pre-training phase, only about 10 billion tokens (about 0.15% of the base model's training) are needed to fully restore the capabilities of the quantized model.

Since the application adapters will be fine-tuned from these accuracy-recovery adapters, they will not incur any additional memory usage or inference cost. Regarding adapter size, Apple has found that an adapter rank of 16 provides the best trade-off between model capacity and inference performance.

However, for flexibility, Apple provides a set of accuracy-recovery adapters with different ranks {8, 16, 32} for application teams to choose from.

Mixed precision quantization

Residual connections exist in every transformer block and every layer of AFM, so it is unlikely that all layers are equally important. Following this intuition, Apple further reduced memory usage by pushing certain layers to 2-bit quantization (the default is 4-bit). On average, AFM-on-device can be compressed to only about 3.5 bits per weight (bpw) without significant quality loss.
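
As a back-of-the-envelope illustration of how a roughly 3.5 bpw average can arise from mixing 2-bit and 4-bit layers, consider the tiny calculation below; the 25%/75% split is invented for illustration and is not taken from the report.

```python
# Back-of-the-envelope average bits-per-weight for a mixed 2-bit / 4-bit assignment.
# The per-layer split below is invented; only the ~3.5 bpw figure comes from the report.
def average_bpw(assignments):
    """assignments: list of (num_weights, bits) pairs."""
    total_bits = sum(n * b for n, b in assignments)
    total_weights = sum(n for n, _ in assignments)
    return total_bits / total_weights


if __name__ == "__main__":
    # e.g. a quarter of the weights pushed to 2-bit, the rest kept at 4-bit:
    print(average_bpw([(250, 2), (750, 4)]))   # -> 3.5
```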

Evaluation

The research team uses common open source evaluation tools and benchmarks to evaluate the AFM pre-trained model. Table 2 shows the results of AFM-on-device and AFM-server on HELM MMLU v1.5.0.

These benchmarks show that the AFM pre-trained model has strong language and inference capabilities, providing a solid foundation for post-training and feature fine-tuning.

The comparison of AFM with open-source models (Phi-3, Gemma-1.1, Llama-3, Mistral, DBRX-Instruct) and commercial models (GPT-3.5 and GPT-4) is shown in Figure 3. Human evaluators preferred the AFM models over the others. In particular, AFM-on-device achieved a 47.7% win rate against Phi-3-mini despite being about 25% smaller in model size, and even outperformed the strong open-source baselines Gemma-7B and Mistral-7B.

To measure the model’s ability to generate responses that follow instructions in prompts, the research team evaluated AFM-on-device and AFM-server on the IFEval benchmark, with the results shown in Figure 4 below:

As shown in Figure 5, AFM-server achieves the best overall accuracy, better than Gemini-1.5-Pro-Preview-0514 and GPT-4.

Apple also compared AFM to some of the best models as well as to smaller open-source models. As shown in Figure 6, AFM-on-device achieves performance equivalent to or better than Gemma-7B and Mistral-7B. AFM-server performs significantly better than DBRX-Instruct and GPT-3.5, and is comparable to GPT-4.

Figure 7 compares the performance of post-trained AFM on mathematical benchmarks. It was found that AFM-on-device performed significantly better than Mistral-7B and Gemma-7B, even though it was less than half their size.

The figure below shows human raters evaluating the quality of the AFM-on-device adapter, Phi-3-mini, Llama-3-8B and Gemma-7B on the summarization task. Figure 8 shows that the AFM-on-device adapter generally outperforms the other models.

Responsible AI

Apple Intelligence is developed and designed with user privacy in mind.

Figure 9 summarizes the violation rates given by human raters for different models; lower is better. Both AFM-on-device and AFM-server are robust to adversarial prompts, with significantly lower violation rates than open-source and commercial models.

Figure 10 shows that the AFM model is preferred by human raters compared to other models.
