
Intel Meteor Lake processor integrates NPU, bringing the AI era to PCs

王林 | 2023-09-20 20:57:06

Intel today released its latest Meteor Lake processor and gave a detailed introduction to Meteor Lake's integrated NPU.


Intel said that AI is penetrating every aspect of people's lives. Although cloud AI offers scalable computing capability, it has limitations: it depends on a network connection, suffers higher latency, costs more to deploy, and raises privacy concerns. Meteor Lake brings AI to the client PC, delivering low-latency AI compute that better protects data privacy, and at a lower cost.

Intel said that starting with Meteor Lake, it will bring AI to PCs at scale, leading hundreds of millions of PCs into the AI era, with the vast x86 ecosystem providing a wide range of software models and tools.
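As one concrete illustration of those tools, Intel's own OpenVINO runtime exposes client NPUs as a compile target. The sketch below is illustrative only: the "NPU" device name, the runtime version, and the placeholder model path are assumptions that depend on the installed driver and OpenVINO release.

```python
# Minimal sketch: discovering the NPU and compiling a model with OpenVINO.
# Assumes a recent OpenVINO release with the Intel NPU plugin and driver
# installed; "model.xml" is a placeholder for any OpenVINO IR model.
from openvino import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

if "NPU" in core.available_devices:
    model = core.read_model("model.xml")
    compiled = core.compile_model(model, device_name="NPU")
    request = compiled.create_infer_request()
    # request.infer({0: input_tensor}) would then run entirely on the
    # local NPU: no network round-trip, and the data never leaves the PC.
```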

IT Home attaches a detailed explanation of the Intel NPU architecture:

Host Interface and Device Management - The device management area supports Microsoft's new driver model, the Microsoft Compute Driver Model (MCDM). This allows Meteor Lake's NPU to support MCDM well while ensuring security, and the memory management unit (MMU) provides isolation between multiple scenarios and supports power and workload scheduling, enabling fast transitions to low-power states.
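In practice, applications usually reach an MCDM-managed accelerator through a runtime rather than through the driver model directly. One hedged illustration uses ONNX Runtime's DirectML execution provider, which targets MCDM-class compute devices on Windows; whether a given model actually lands on the NPU depends on driver and runtime support, and "model.onnx" is a placeholder path.

```python
# Sketch: requesting the DirectML execution provider in ONNX Runtime.
# DirectML sits on top of MCDM-class compute drivers on Windows; whether a
# model is dispatched to the NPU depends on the driver and runtime version.
# "model.onnx" is a placeholder model path.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print("Active providers:", session.get_providers())
# outputs = session.run(None, {"input": input_array}) executes locally.
```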

Multi-engine architecture - The NPU is built on a multi-engine architecture with two neural compute engines that can either work together on a single workload or each handle a different workload. Within the neural compute engine there are two main computing components. The first is the inference pipeline, the core driver of energy-efficient computing: by minimizing data movement and relying on fixed-function operations, it handles the common, large-scale computations of neural network execution with high energy efficiency. The vast majority of computation happens in this inference pipeline, a fixed-function hardware pipeline that supports standard neural network operations; it consists of a multiply-accumulate (MAC) array, an activation function block, and a data conversion block. The second is the SHAVE DSP, a highly optimized VLIW DSP (very long instruction word digital signal processor) designed specifically for AI. The Streaming Hybrid Architecture Vector Engine (SHAVE) can be pipelined with the inference pipeline and the direct memory access (DMA) engine, enabling truly heterogeneous parallel computing on the NPU to maximize performance.
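To make that division of labor concrete, here is a toy NumPy sketch of the fixed-function dataflow described above: a MAC array stage, an activation stage, and a data conversion stage chained into one pipeline. This is a software model for illustration only, not Intel's hardware design; all names and shapes are invented for the example.

```python
# Toy software model of the fixed-function inference pipeline:
# MAC array -> activation function block -> data conversion block.
# Purely illustrative; names and shapes are invented, not Intel's design.
import numpy as np

def mac_array(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # The MAC array performs the bulk multiply-accumulate work of a layer.
    return activations @ weights

def activation_block(x: np.ndarray) -> np.ndarray:
    # Fixed-function nonlinearity (ReLU here, for illustration).
    return np.maximum(x, 0.0)

def data_conversion_block(x: np.ndarray) -> np.ndarray:
    # Requantize to int8 so the next stage's inputs stay compact,
    # minimizing data movement between stages.
    scale = max(np.abs(x).max() / 127.0, 1e-8)
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# One pipeline pass over a toy layer.
acts = np.random.randn(1, 64).astype(np.float32)
w = np.random.randn(64, 32).astype(np.float32)
out = data_conversion_block(activation_block(mac_array(acts, w)))
print(out.shape, out.dtype)  # (1, 32) int8
```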

DMA engine - The DMA engine optimizes data movement, improving energy efficiency and performance.
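The point of a dedicated DMA engine is that data movement can overlap with computation. Below is a hedged sketch of that double-buffering idea in plain Python, where a background thread stands in for the DMA engine; everything here is invented for illustration.

```python
# Illustrative double-buffering: while the "compute engine" works on one
# tile, the "DMA engine" (a background thread here) fetches the next tile.
# All names are invented for the example; real DMA is hardware, not threads.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def dma_fetch(i: int) -> np.ndarray:
    # Stand-in for a DMA transfer from system memory to NPU-local memory.
    return np.full((256, 256), i, dtype=np.float32)

def compute(tile: np.ndarray) -> float:
    # Stand-in for the inference pipeline consuming a tile.
    return float(tile.sum())

results = []
with ThreadPoolExecutor(max_workers=1) as dma_engine:
    next_tile = dma_engine.submit(dma_fetch, 0)
    for i in range(1, 4):
        tile = next_tile.result()                    # wait for current tile
        next_tile = dma_engine.submit(dma_fetch, i)  # prefetch the next one
        results.append(compute(tile))                # compute overlaps fetch
    results.append(compute(next_tile.result()))
print(results)
```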



Statement: This article is reproduced from sohu.com. If there is any infringement, please contact admin@php.cn for deletion.