How can you optimize an 8-bit positional popcount algorithm using assembly, specifically by focusing on the inner loop and utilizing techniques like prefetching and scalar population counting?
In the provided code, the function _mm_add_epi32_inplace_purego can be rewritten in assembly to improve its performance; the inner loop in particular is the part worth hand-optimizing.
The algorithm in question is a positional population count: for a stream of bytes, it counts, for each of the eight bit positions, how many bytes have that bit set. It shows up in machine-learning workloads. In the given code, _mm_add_epi32_inplace_purego is called from two nested loops, and the goal is to optimize the inner one.
The code accumulates into an array of eight 32-bit counters called counts. The inner loop iterates over a byte slice, and for each byte it adds that byte's expanded bit pattern, taken from the lookup table _expand_byte, to the counts array; _expand_byte maps every possible byte value to eight 0/1 entries, one per bit position.
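For reference, a minimal pure-Go sketch of the computation being optimized might look like the following. The package and function names here are placeholders, not the original _mm_add_epi32_inplace_purego / _expand_byte code; the sketch simply spells out the same loop.
<code class="go">package pospopcnt

// pospopcntGeneric is a naive positional population count: for every byte
// in buf, bit position i contributes 1 to counts[i] whenever that bit is set.
func pospopcntGeneric(counts *[8]int32, buf []byte) {
	for _, b := range buf {
		for i := uint(0); i < 8; i++ {
			counts[i] += int32(b>>i) & 1
		}
	}
}</code>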
To optimize the inner loop in assembly, keep the eight counters in general-purpose registers and prefetch memory well in advance to improve streaming behavior. The vectorized main loop loads 32 bytes at a time, uses VPMOVMSKB to gather the most significant bit of every byte into a 32-bit mask, counts that mask with POPCNT, and then adds the vector to itself with VPADDD so the next bit position moves into the MSB slot. Leftover bytes are handled with a scalar shift-and-add combination (SHRL/ADCL): each SHRL $1 shifts the next bit into the carry flag, and ADCL $0 adds that carry to the corresponding counter.
An example of such assembly code is provided below. It is written in Go's assembler syntax for amd64 and relies on AVX2 and POPCNT, so it will need to be adapted (or guarded by a fallback) on systems without those extensions.
<code class="assembly">#include "textflag.h"

// func PospopcntReg(counts *[8]int32, buf []byte)
TEXT ·PospopcntReg(SB),NOSPLIT,$0-32
	MOVQ counts+0(FP), DI
	MOVQ buf_base+8(FP), SI		// SI = &buf[0]
	MOVQ buf_len+16(FP), CX		// CX = len(buf)

	// load counts into registers R8--R15
	MOVL 4*0(DI), R8
	MOVL 4*1(DI), R9
	MOVL 4*2(DI), R10
	MOVL 4*3(DI), R11
	MOVL 4*4(DI), R12
	MOVL 4*5(DI), R13
	MOVL 4*6(DI), R14
	MOVL 4*7(DI), R15

	SUBQ $32, CX			// pre-subtract 32 bytes from CX
	JL scalar

vector:	VMOVDQU (SI), Y0		// load 32 bytes from buf
	PREFETCHT0 384(SI)		// prefetch some data
	ADDQ $32, SI			// advance SI past them

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R15			// add to counter
	VPADDD Y0, Y0, Y0		// shift Y0 left by one place

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R14			// add to counter
	VPADDD Y0, Y0, Y0		// shift Y0 left by one place

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R13			// add to counter
	VPADDD Y0, Y0, Y0		// shift Y0 left by one place

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R12			// add to counter
	VPADDD Y0, Y0, Y0		// shift Y0 left by one place

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R11			// add to counter
	VPADDD Y0, Y0, Y0		// shift Y0 left by one place

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R10			// add to counter
	VPADDD Y0, Y0, Y0		// shift Y0 left by one place

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R9			// add to counter
	VPADDD Y0, Y0, Y0		// shift Y0 left by one place

	VPMOVMSKB Y0, AX		// move MSB of Y0 bytes to AX
	POPCNTL AX, AX			// count population of AX
	ADDL AX, R8			// add to counter

	SUBQ $32, CX
	JGE vector			// repeat as long as bytes are left

scalar:	ADDQ $32, CX			// undo last subtraction
	JE done				// if CX=0, there's nothing left

loop:	MOVBLZX (SI), AX		// load a byte from buf
	INCQ SI				// advance past it

	SHRL $1, AX			// CF=LSB, shift byte to the right
	ADCL $0, R8			// add CF to R8

	SHRL $1, AX
	ADCL $0, R9			// add CF to R9

	SHRL $1, AX
	ADCL $0, R10			// add CF to R10

	SHRL $1, AX
	ADCL $0, R11			// add CF to R11

	SHRL $1, AX
	ADCL $0, R12			// add CF to R12

	SHRL $1, AX
	ADCL $0, R13			// add CF to R13

	SHRL $1, AX
	ADCL $0, R14			// add CF to R14

	SHRL $1, AX
	ADCL $0, R15			// add CF to R15

	DECQ CX				// mark this byte as done
	JNE loop			// and proceed if any bytes are left

	// write R8--R15 back to counts
done:	MOVL R8, 4*0(DI)
	MOVL R9, 4*1(DI)
	MOVL R10, 4*2(DI)
	MOVL R11, 4*3(DI)
	MOVL R12, 4*4(DI)
	MOVL R13, 4*5(DI)
	MOVL R14, 4*6(DI)
	MOVL R15, 4*7(DI)
	VZEROUPPER			// restore SSE-compatibility
	RET</code>
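To call this routine from Go, the package also needs a Go-side declaration for the assembly function. A minimal sketch is shown below; the build tag, file layout, and package name are assumptions, not part of the original answer.
<code class="go">//go:build amd64

package pospopcnt

// PospopcntReg adds the positional popcount of buf to *counts.
// It is implemented in assembly (e.g. in a pospopcnt_amd64.s file).
//
//go:noescape
func PospopcntReg(counts *[8]int32, buf []byte)</code>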
In summary, the optimization keeps the counters in general-purpose registers, prefetches memory in advance, processes 32 bytes per iteration with VPMOVMSKB and POPCNT, and handles the remaining tail with a SHRL/ADCL scalar loop. This approach can significantly improve the performance of the positional population count algorithm.
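Since the fast path depends on AVX2 and POPCNT, a small run-time dispatcher can choose between the assembly routine and a portable fallback. The sketch below uses golang.org/x/sys/cpu for feature detection; the wrapper and its names are assumptions added for illustration, not part of the original code.
<code class="go">//go:build amd64

package pospopcnt

import "golang.org/x/sys/cpu"

// Pospopcnt adds the positional popcount of buf to *counts, taking the
// assembly fast path only when the CPU supports the instructions it uses.
func Pospopcnt(counts *[8]int32, buf []byte) {
	if cpu.X86.HasAVX2 && cpu.X86.HasPOPCNT {
		PospopcntReg(counts, buf) // AVX2 + POPCNT fast path
		return
	}
	pospopcntGeneric(counts, buf) // portable fallback
}</code>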