Classic methods and effect comparison of feature enhancement & personalization in CTR estimation
In CTR estimation, the mainstream approach is feature embedding + MLP, in which features are critical. However, the same feature receives the same representation in every sample, and feeding such static representations into the downstream model limits its expressive ability.
To solve this problem, a line of work in the CTR estimation field proposes what are known as feature enhancement modules. A feature enhancement module adjusts the output of the embedding layer on a per-sample basis, adapting feature representations to each sample and improving the expressive ability of the model.
Recently, Fudan University and Microsoft Research Asia jointly released a survey of feature enhancement work, comparing how different feature enhancement modules are implemented and how well they perform. Below we introduce the implementation of several feature enhancement modules, as well as the comparative experiments conducted in the survey.
Title of the paper: A Comprehensive Summarization and Evaluation of Feature Refinement Modules for CTR Prediction
Download address: https://arxiv.org/pdf/2311.04625v1.pdf
The feature enhancement module is designed to improve the expressive ability of the embedding layer of a CTR prediction model, so that the same feature can have different representations in different samples. Feature enhancement modules can be expressed by the unified formula below: the original embedding is fed into a function that generates a personalized embedding for the sample.
E' = F(E), where E = [e1; e2; ...; ef] stacks the sample's original feature embeddings and F is the feature enhancement function that outputs personalized embeddings of the same shape.
The general idea of these methods is: after obtaining the initial embedding of each feature, use the representation of the sample itself to transform the feature embeddings and produce a personalized embedding for the current sample. Below we introduce some classic feature enhancement module designs.
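To make the later modules concrete, here is a minimal PyTorch sketch of the common interface they all share; the class name and shape convention are illustrative assumptions, not from the paper. Every module takes a batch of stacked feature embeddings and returns refined embeddings of the same shape.

```python
import torch
import torch.nn as nn

class FeatureRefinementModule(nn.Module):
    """Common interface of feature enhancement (refinement) modules.

    Input and output shape: (batch_size, num_fields, embed_dim).
    Concrete modules (IFM, FiBiNET, GateNet, ...) differ only in how they
    compute the refined embeddings from the original ones.
    """

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError
```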
An Input-aware Factorization Machine for Sparse Prediction (IJCAI 2019) adds a reweight layer after the embedding layer. The initial embeddings of a sample are fed into an MLP to obtain a vector representing the sample, which is normalized with softmax. Each element of the softmax output corresponds to one feature and represents that feature's importance. These softmax scores are multiplied with the corresponding feature embeddings, achieving sample-level weighting of the feature embeddings.
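A minimal sketch of this reweighting idea, following the description above; the class name, hidden size, and other details are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class InputAwareReweight(nn.Module):
    """Sample-level feature reweighting in the spirit of IFM (illustrative)."""

    def __init__(self, num_fields: int, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_fields * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_fields),
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, num_fields, embed_dim)
        batch = embeddings.shape[0]
        scores = self.mlp(embeddings.reshape(batch, -1))   # one score per feature
        weights = torch.softmax(scores, dim=-1)            # normalized importance
        return embeddings * weights.unsqueeze(-1)          # sample-level reweighting
```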
FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction (RecSys 2019) adopts a similar idea: the model learns a personalized weight for each feature of each sample. The process has three steps: squeeze, excitation, and re-weight. In the squeeze stage, each feature's embedding vector is pooled into a statistical scalar. In the excitation stage, these scalars are fed into an MLP to obtain a weight for each feature. Finally, these weights are multiplied with each feature's embedding to obtain the weighted embeddings, which is equivalent to sample-level filtering of feature importance.
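A minimal sketch of the squeeze/excitation/re-weight steps described above; the reduction ratio and layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SENETLayer(nn.Module):
    """Squeeze -> excitation -> re-weight over feature fields (illustrative)."""

    def __init__(self, num_fields: int, reduction: int = 3):
        super().__init__()
        hidden = max(1, num_fields // reduction)
        self.excitation = nn.Sequential(
            nn.Linear(num_fields, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_fields),
            nn.ReLU(),
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, num_fields, embed_dim)
        z = embeddings.mean(dim=-1)           # squeeze: one scalar per field
        a = self.excitation(z)                # excitation: per-field weight
        return embeddings * a.unsqueeze(-1)   # re-weight the field embeddings
```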
A Dual Input-aware Factorization Machine for CTR Prediction (IJCAI 2020) follows a similar reweighting idea and uses self-attention to enhance features. It consists of two modules: a vector-wise one and a bit-wise one. The vector-wise module treats each feature's embedding as an element of a sequence and feeds it into a Transformer to obtain fused feature representations; the bit-wise module maps the original features with a multi-layer MLP. The outputs of the two parts are added to obtain a weight for every element of every feature, which is multiplied bit by bit with the corresponding original feature to produce the enhanced features.
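A minimal sketch of the dual vector-wise + bit-wise weighting described above, assuming a single Transformer encoder layer for the vector-wise branch and a small MLP for the bit-wise branch; all hyperparameters and names are illustrative.

```python
import torch
import torch.nn as nn

class DualInputAwareRefinement(nn.Module):
    """Vector-wise (self-attention) + bit-wise (MLP) weighting, DIFM-style (illustrative)."""

    def __init__(self, num_fields: int, embed_dim: int, num_heads: int = 2):
        super().__init__()
        # Vector-wise branch: each field embedding is one sequence element.
        # embed_dim must be divisible by num_heads.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=4 * embed_dim, batch_first=True)
        self.vector_wise = nn.TransformerEncoder(encoder_layer, num_layers=1)
        self.vector_proj = nn.Linear(num_fields * embed_dim, num_fields * embed_dim)
        # Bit-wise branch: MLP over the flattened embeddings.
        self.bit_wise = nn.Sequential(
            nn.Linear(num_fields * embed_dim, num_fields * embed_dim),
            nn.ReLU(),
            nn.Linear(num_fields * embed_dim, num_fields * embed_dim),
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, num_fields, embed_dim)
        batch, num_fields, embed_dim = embeddings.shape
        flat = embeddings.reshape(batch, -1)
        vec = self.vector_proj(self.vector_wise(embeddings).reshape(batch, -1))
        bit = self.bit_wise(flat)
        # Sum of the two branches gives one weight per element of each feature.
        weights = (vec + bit).reshape(batch, num_fields, embed_dim)
        return embeddings * weights
```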
GateNet: Gating-Enhanced Deep Network for Click-Through Rate Prediction (2020) passes the initial embedding vector of each feature through an MLP and a sigmoid to generate an independent weight score for that feature, and also uses an MLP to map all features into bit-wise weight scores; the two are combined to weight the input features. Beyond the feature layer, a similar scheme is applied to the hidden layers of the MLP, weighting the input of each hidden layer.
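A minimal sketch of the feature-embedding gate described above; only the embedding-layer gate is shown, and the per-field parameterization is an illustrative assumption (GateNet applies gates to hidden layers in the same spirit).

```python
import torch
import torch.nn as nn

class FeatureEmbeddingGate(nn.Module):
    """Per-field sigmoid gating of feature embeddings, GateNet-style (illustrative)."""

    def __init__(self, num_fields: int, embed_dim: int):
        super().__init__()
        # One independent gate weight matrix per field, producing bit-wise gate values.
        self.gate_weights = nn.Parameter(
            torch.randn(num_fields, embed_dim, embed_dim) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, num_fields, embed_dim)
        # Each field's gate is computed from that field's own embedding.
        gates = torch.sigmoid(
            torch.einsum("bfd,fde->bfe", embeddings, self.gate_weights))
        return embeddings * gates
```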
Interpretable Click-Through Rate Prediction through Hierarchical Attention (WSDM 2020) also uses self-attention for feature transformation, but adds the generation of high-order features. It uses hierarchical self-attention: each self-attention layer takes the output of the previous layer as input and adds one more order of feature combination, achieving hierarchical multi-order feature extraction. Specifically, after each layer's self-attention, the newly generated feature matrix is passed through a softmax to obtain a weight for each feature; the new features are weighted by these scores and then combined with the original features via an element-wise product, raising the order of feature crossing by one.
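A minimal sketch of one hierarchical layer, written as a loose, illustrative reading of the description above rather than the paper's exact equations; all names and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionLayer(nn.Module):
    """One layer of hierarchical feature crossing (illustrative, InterHAt-like)."""

    def __init__(self, embed_dim: int, num_heads: int = 2):
        super().__init__()
        # embed_dim must be divisible by num_heads.
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, x_k: torch.Tensor, x_1: torch.Tensor) -> torch.Tensor:
        # x_k: current k-th order features, x_1: original first-order features,
        # both of shape (batch, num_fields, embed_dim).
        attn_out, _ = self.self_attn(x_k, x_k, x_k)           # feature interaction
        weights = torch.softmax(self.score(attn_out), dim=1)  # per-feature weights
        weighted = weights * attn_out                         # re-weighted new features
        return weighted * x_1                                 # element-wise product: one order higher
```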
ContextNet: A Click-Through Rate Prediction Framework Using Contextual information to Refine Feature Embedding (2021) takes a similar approach: an MLP maps all of a sample's features into a vector of the feature embedding size, which is used to scale the original features. The article uses personalized MLP parameters for each feature. In this way, each feature is enhanced using the other features of the sample as context.
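A minimal sketch of the contextual scaling described above; the shared context encoder plus per-field projection mirrors the "personalized parameters per feature" point, but the sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextualEmbeddingRefinement(nn.Module):
    """Scale each field's embedding using the whole sample as context (illustrative)."""

    def __init__(self, num_fields: int, embed_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Shared aggregation of the sample context, then one projection per field.
        self.context = nn.Sequential(
            nn.Linear(num_fields * embed_dim, hidden_dim),
            nn.ReLU(),
        )
        self.per_field_proj = nn.Parameter(
            torch.randn(num_fields, hidden_dim, embed_dim) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, num_fields, embed_dim)
        batch = embeddings.shape[0]
        ctx = self.context(embeddings.reshape(batch, -1))              # sample context
        scales = torch.einsum("bh,fhe->bfe", ctx, self.per_field_proj) # per-field scaling vectors
        return embeddings * scales
```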
Enhancing CTR Prediction with Context-Aware Feature Representation Learning (SIGIR 2022) uses self-attention for feature enhancement. For a set of input features, each feature influences the others to a different degree; self-attention over the feature embeddings realizes information interaction among the features within a sample. In addition to this feature-level interaction, the article also uses an MLP for bit-level information interaction. The new embeddings generated above are merged with the original embeddings through a gate network to obtain the final refined feature representation.
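A minimal sketch of the gated fusion described above, written as an illustrative FRNet-style module rather than the authors' exact architecture; layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class GatedFeatureRefinement(nn.Module):
    """Self-attention + bit-level MLP, fused with the original embedding via a gate (illustrative)."""

    def __init__(self, num_fields: int, embed_dim: int, num_heads: int = 2):
        super().__init__()
        # embed_dim must be divisible by num_heads.
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.bit_level = nn.Sequential(
            nn.Linear(num_fields * embed_dim, num_fields * embed_dim),
            nn.ReLU(),
        )
        self.gate = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, num_fields, embed_dim)
        batch = embeddings.shape[0]
        attn_out, _ = self.self_attn(embeddings, embeddings, embeddings)  # feature-level interaction
        bit_out = self.bit_level(embeddings.reshape(batch, -1)).reshape_as(embeddings)
        refined = attn_out + bit_out                                      # new context-aware embedding
        g = torch.sigmoid(self.gate(torch.cat([embeddings, refined], dim=-1)))
        return g * refined + (1 - g) * embeddings                         # gated fusion with the original
```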
After comparing the effects of the various feature enhancement methods, the survey reaches an overall conclusion: among the many feature enhancement modules, GFRL, FRNet-V, and FRNet-B perform best and outperform the other feature enhancement methods.