The Unreasonable Usefulness of numpy's einsum

Introduction

I'd like to introduce you to the most useful method in Python, np.einsum.

With np.einsum (and its counterparts in Tensorflow and JAX), you can write complicated matrix and tensor operations in an extremely clear and succinct way. I've also found that its clarity and succinctness relieves a lot of the mental overload that comes with working with tensors.

And it's actually fairly simple to learn and use. Here's how it works:

In np.einsum, you have a subscripts string argument and you have one or more operands:

np.einsum(subscripts: str, *operands: np.ndarray)

The subscripts argument is a "mini-language" that tells numpy how to manipulate and combine the axes of the operands. It's a little difficult to read at first, but it's not bad when you get the hang of it.

Single Operands

For a first example, let's use np.einsum to swap the axes of a matrix A (i.e., take its transpose):

M = np.einsum('ij->ji', A)

The letters i and j are bound to the first and second axes of A. Numpy binds letters to axes in the order they appear, but numpy doesn't care what letters you use if you are explicit. We could have used a and b, for example, and it works the same way:

M = np.einsum('ab->ba', A)

However, you must supply as many letters as there are axes in the operand. There are two axes in A, so you must supply two distinct letters. The next example won't work because the subscripts formula only has one letter to bind, i:

# broken
M = np.einsum('i->i', A)

On the other hand, if the operand does indeed have only one axis (in other words, it is a vector), then the single-letter subscript formula works just fine, although it isn't very useful because it leaves the vector a as-is:

m = np.einsum('i->i', a)
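A quick sanity check, with a small made-up matrix and vector, confirms both behaviors:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# 'ij->ji' swaps the two axes of A -- the same result as A.T
M = np.einsum('ij->ji', A)
assert M.shape == (3, 2)
assert np.array_equal(M, A.T)

# on a vector, 'i->i' is just the identity
a = np.array([1.0, 2.0, 3.0])
assert np.array_equal(np.einsum('i->i', a), a)
```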

Summing Over Axes

But what about this operation? There's no i on the right-hand-side. Is this valid?

c = np.einsum('i->', a)

Surprisingly, yes!

Here is the first key to understanding the essence of np.einsum: If an axis is omitted from the right-hand-side, then the axis is summed over.


Code:

c = 0
I = len(a)
for i in range(I):
   c += a[i]

The summing-over behavior isn't limited to a single axis. For example, you can sum over two axes at once with the subscript formula c = np.einsum('ij->', A).


Here is the corresponding Python code for summing over both axes:

c = 0
I,J = A.shape
for i in range(I):
   for j in range(J):
      c += A[i,j]
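The loops above match what numpy computes. With small made-up inputs, a check might look like this:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# omitting an index from the right-hand side sums over that axis
assert np.einsum('i->', a) == a.sum()    # 6.0
assert np.einsum('ij->', A) == A.sum()   # 10.0
```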

But it doesn't stop there - we can get creative and sum some axes and leave others alone. For example, np.einsum('ij->i', A) sums over the columns of matrix A, leaving a vector of row sums with one entry per value of i:


Code:

I,J = A.shape
r = np.zeros(I)
for i in range(I):
   for j in range(J):
      r[i] += A[i,j]

Likewise, np.einsum('ij->j', A) sums columns in A.


Code:

I,J = A.shape
r = np.zeros(J)
for i in range(I):
   for j in range(J):
      r[j] += A[i,j]
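Both axis sums agree with numpy's built-in np.sum along the corresponding axis; a small worked example:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# 'ij->i' keeps axis i and sums over j: one sum per row
assert np.array_equal(np.einsum('ij->i', A), A.sum(axis=1))  # [6, 15]

# 'ij->j' keeps axis j and sums over i: one sum per column
assert np.array_equal(np.einsum('ij->j', A), A.sum(axis=0))  # [5, 7, 9]
```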

Two Operands

There's a limit to what we can do with a single operand. Things get a lot more interesting (and useful) with two operands.

Let's suppose you have two vectors a = [a_1, a_2, ... ] and b = [b_1, b_2, ...].

If len(a) == len(b), we can compute the inner product (also called the dot product) like this:

c = np.einsum('i,i->', a, b)

Two things are happening here simultaneously:

  1. Because i is bound to both a and b, a and b are "lined up" and then multiplied together: a[i] * b[i].
  2. Because the index i is excluded from the right-hand-side, axis i is summed over in order to eliminate it.

If you put (1) and (2) together, you get the classic inner product.


Code:

c = 0
I = len(a)
for i in range(I):
   c += a[i] * b[i]

Now, suppose we don't omit i from the subscript formula. Then each a[i] and b[i] are multiplied pairwise, with no summing over i:

c = np.einsum('i,i->i', a, b)


Code:

I = len(a)
c = np.zeros(I)
for i in range(I):
   c[i] = a[i] * b[i]

This is also called element-wise multiplication (or the Hadamard Product for matrices), and is typically done via the numpy method np.multiply.
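Both two-operand variants agree with the usual numpy methods; a small check with made-up vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# keep i on the right-hand side: element-wise product, no summation
assert np.array_equal(np.einsum('i,i->i', a, b), np.multiply(a, b))

# drop i: the pairwise products are summed, giving the inner product
assert np.einsum('i,i->', a, b) == np.dot(a, b)  # 32.0
```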

There's yet a third variation of the subscript formula, which produces the outer product:

C = np.einsum('i,j->ij', a, b)

In this subscript formula, the axes of a and b are bound to separate letters, and thus are treated as separate "loop variables". Therefore C has entries a[i] * b[j] for all i and j, arranged into a matrix.


Code:

I,J = len(a),len(b)
C = np.zeros((I,J))
for i in range(I):
   for j in range(J):
      C[i,j] = a[i] * b[j]
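This matches numpy's np.outer; a quick check with hypothetical vectors of different lengths:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])

# separate letters i and j: every pairing a[i] * b[j] survives
C = np.einsum('i,j->ij', a, b)
assert C.shape == (3, 2)
assert np.array_equal(C, np.outer(a, b))
```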

Three Operands

Taking the outer product a step further, here's a three-operand version:

T = np.einsum('i,j,k->ijk', a, b, c)


The equivalent Python code for our three-operand outer product is:

I,J,K = len(a),len(b),len(c)
T = np.zeros((I,J,K))
for i in range(I):
   for j in range(J):
      for k in range(K):
         T[i,j,k] = a[i] * b[j] * c[k]
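The three-operand outer product produces a rank-3 tensor whose entries are products of one element from each vector; a check with small made-up vectors:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0, 5.0])
c = np.array([6.0, 7.0])

T = np.einsum('i,j,k->ijk', a, b, c)
assert T.shape == (2, 3, 2)
# each entry is the product of one element from each vector
assert T[1, 2, 0] == a[1] * b[2] * c[0]  # 2.0 * 5.0 * 6.0 == 60.0
```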

Going even further, there's nothing stopping us from omitting axes to sum over them in addition to transposing the result by writing ki instead of ik on the right-hand-side of ->:

M = np.einsum('i,j,k->ki', a, b, c)

The equivalent Python code would read:

I,J,K = len(a),len(b),len(c)
M = np.zeros((K,I))
for i in range(I):
   for j in range(J):
      for k in range(K):
         M[k,i] += a[i] * b[j] * c[k]

Now I hope you can begin to see how you can specify complicated tensor operations rather easily. When I worked more extensively with numpy, I found myself reaching for np.einsum any time I had to implement a complicated tensor operation.

In my experience, np.einsum makes for easier code reading later - I can readily read off the above operation straight from the subscripts: "the outer product of three vectors, with the middle axis summed over, and the final result transposed". If I had to read a complicated series of numpy operations instead, I might find myself tongue-tied.

A Practical Example

For a practical example, let's implement the equation at the heart of LLMs, from the classic paper "Attention is All You Need".

Eq. 1 describes the Attention Mechanism:

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V

We'll focus our attention on the term QK^T, because softmax isn't computable with np.einsum, and the scaling factor 1/sqrt(d_k) is trivial to apply.

The QK^T term represents the dot products of m queries with n keys. Q is a collection of m d-dimensional row vectors stacked into a matrix, so Q has shape (m, d). Likewise, K is a collection of n d-dimensional row vectors stacked into a matrix, so K has shape (n, d).

The product between a single Q and K would be written as:

np.einsum('md,nd->mn', Q, K)

Note that because of the way we wrote our subscripts equation, we avoided having to transpose K prior to matrix multiplication!
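As a sanity check, with made-up sizes for m, n, and d, the subscript formula reproduces Q @ K.T exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 4, 5, 8  # hypothetical sizes: m queries, n keys, dimension d
Q = rng.standard_normal((m, d))
K = rng.standard_normal((n, d))

# 'md,nd->mn' lines up the d axes and sums over them:
# the same result as Q @ K.T, with no explicit transpose
scores = np.einsum('md,nd->mn', Q, K)
assert scores.shape == (m, n)
assert np.allclose(scores, Q @ K.T)
```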


So, that seems pretty straightforward - in fact, it's just a traditional matrix multiplication. However, we're not done yet. Attention Is All You Need uses multi-head attention, which means we really have several such matrix multiplications happening simultaneously over an indexed collection of Q matrices and K matrices.

To make things a bit clearer, we might rewrite the product as Q_i K_i^T.

That means we have an additional axis i for both Q and K.

And what's more, if we are in a training setting, we are probably executing a batch of such multi-headed attention operations.

So presumably we would want to perform the operation over a batch of examples along a batch axis b. Thus, the complete product would be something like:

np.einsum('bimd,bind->bimn', Q, K)

I'm going to skip the diagram here because we're dealing with 4-axis tensors. But you might be able to picture "stacking" the earlier diagram to get our multi-head axis i, and then "stacking" those "stacks" to get our batch axis b.

It's difficult for me to see how we would implement such an operation with any combination of the other numpy methods. Yet, with a little bit of inspection, it's clear what's happening: over a batch, over a collection of matrices Q and K, perform the matrix multiplication QK^T.
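A quick check with hypothetical batch, head, and dimension sizes shows the four-axis formula agrees with broadcast matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)
nb, nh, m, n, d = 2, 3, 4, 5, 8  # hypothetical batch, heads, queries, keys, dim
Q = rng.standard_normal((nb, nh, m, d))
K = rng.standard_normal((nb, nh, n, d))

# batch axis b and head axis i ride along unchanged; the matrix product
# happens over the last two axes of each operand
scores = np.einsum('bimd,bind->bimn', Q, K)
assert scores.shape == (nb, nh, m, n)
assert np.allclose(scores, Q @ np.swapaxes(K, -1, -2))
```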

Now, isn't that wonderful?

Shameless Plug

After doing the founder mode grind for a year, I'm looking for work. I've got over 15 years experience in a wide variety of technical fields and programming languages and also experience managing teams. Math and statistics are focus areas. DM me and let's talk!
