This article examines the HybridSimilarity algorithm, a composite neural model for scoring the similarity of text pairs. The hybrid model integrates lexical, phonetic, semantic, and syntactic comparisons into a single comprehensive similarity score.
<code class="language-python">import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sentence_transformers import SentenceTransformer
from Levenshtein import ratio as levenshtein_ratio
from phonetics import metaphone
import torch
import torch.nn as nn

class HybridSimilarity(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = SentenceTransformer('all-MiniLM-L6-v2')
        self.tfidf = TfidfVectorizer()
        self.attention = nn.MultiheadAttention(embed_dim=384, num_heads=4)
        self.fc = nn.Sequential(
            nn.Linear(6, 256),  # 6 scalar features are extracted per text pair
            nn.ReLU(),
            nn.LayerNorm(256),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )

    def _extract_features(self, text1, text2):
        features = {}

        # Lexical analysis
        features['levenshtein'] = levenshtein_ratio(text1, text2)
        features['jaccard'] = len(set(text1.split()) & set(text2.split())) / len(set(text1.split()) | set(text2.split()))

        # Phonetic analysis
        features['metaphone'] = 1.0 if metaphone(text1) == metaphone(text2) else 0.0

        # Semantic analysis (BERT embeddings); dim=0 because the embeddings are 1-D
        emb1 = self.bert.encode(text1, convert_to_tensor=True)
        emb2 = self.bert.encode(text2, convert_to_tensor=True)
        features['semantic_cosine'] = nn.CosineSimilarity(dim=0)(emb1, emb2).item()

        # Syntactic analysis (LSA over TF-IDF); with n_components=1 the product is a scalar
        tfidf_matrix = self.tfidf.fit_transform([text1, text2])
        svd = TruncatedSVD(n_components=1)
        lsa = svd.fit_transform(tfidf_matrix)
        features['lsa_cosine'] = float(np.dot(lsa[0], lsa[1]))

        # Attention mechanism: emb1 attends to emb2
        att_output, _ = self.attention(
            emb1.unsqueeze(0).unsqueeze(0),
            emb2.unsqueeze(0).unsqueeze(0),
            emb2.unsqueeze(0).unsqueeze(0)
        )
        features['attention_score'] = att_output.mean().item()

        return torch.tensor(list(features.values())).unsqueeze(0)

    def forward(self, text1, text2):
        features = self._extract_features(text1, text2)
        return self.fc(features).item()

def similarity_coefficient(text1, text2):
    model = HybridSimilarity()
    return model(text1, text2)</code>
The HybridSimilarity model relies on the following key components.

The HybridSimilarity class extends nn.Module and initializes a Sentence-BERT encoder (all-MiniLM-L6-v2), a TF-IDF vectorizer, a multi-head attention layer, and a fully connected scoring network:<code class="language-python">self.bert = SentenceTransformer('all-MiniLM-L6-v2')
self.tfidf = TfidfVectorizer()
self.attention = nn.MultiheadAttention(embed_dim=384, num_heads=4)
self.fc = nn.Sequential(
    nn.Linear(6, 256),  # 6 scalar features are extracted per text pair
    nn.ReLU(),
    nn.LayerNorm(256),
    nn.Linear(256, 1),
    nn.Sigmoid()
)</code>
The _extract_features method computes several similarity features:
Lexical analysis: the Levenshtein ratio over characters and the Jaccard coefficient over word sets.<code class="language-python">features['levenshtein'] = levenshtein_ratio(text1, text2)
features['jaccard'] = len(set(text1.split()) & set(text2.split())) / len(set(text1.split()) | set(text2.split()))</code>
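To make the lexical step concrete, here is a dependency-free sketch of the two measures. It uses difflib from the standard library as a stand-in for python-Levenshtein (difflib's ratio is a related edit-based measure, not the identical formula), and the example strings are invented for illustration:

```python
from difflib import SequenceMatcher

def jaccard_similarity(text1, text2):
    # Token-level Jaccard: |intersection| / |union| of the two word sets
    set1, set2 = set(text1.split()), set(text2.split())
    if not set1 and not set2:
        return 1.0  # treat two empty texts as identical
    return len(set1 & set2) / len(set1 | set2)

def edit_ratio(text1, text2):
    # Stand-in for the Levenshtein ratio: difflib's similarity ratio in [0, 1]
    return SequenceMatcher(None, text1, text2).ratio()

print(jaccard_similarity("the cat sat", "the cat ran"))  # → 0.5 (2 shared words of 4)
```

Both measures return 1.0 for identical inputs and fall toward 0.0 as the texts diverge.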
Phonetic analysis: a binary match between the Metaphone encodings of the two texts.<code class="language-python">features['metaphone'] = 1.0 if metaphone(text1) == metaphone(text2) else 0.0</code>
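The metaphone function comes from the third-party phonetics package. To illustrate the idea of a phonetic key without that dependency, here is a minimal American Soundex sketch (a simpler relative of Metaphone; the simplified rules below ignore the H/W edge cases of the full standard):

```python
def soundex(word):
    # Simplified American Soundex: first letter + up to three digit codes
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    digits = [codes.get(ch, "0") for ch in word if ch.isalpha()]
    # Collapse consecutive duplicate codes, then drop vowels/ignored letters ("0")
    collapsed = [d for i, d in enumerate(digits) if i == 0 or d != digits[i - 1]]
    tail = [d for d in collapsed[1:] if d != "0"]
    return (word[0].upper() + "".join(tail) + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # → R163 R163 (phonetic match)
```

A binary phonetic feature then follows the same pattern as the article's: `1.0 if soundex(a) == soundex(b) else 0.0`.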
Semantic analysis: cosine similarity between Sentence-BERT embeddings (dim=0 because each embedding is a 1-D tensor).<code class="language-python">emb1 = self.bert.encode(text1, convert_to_tensor=True)
emb2 = self.bert.encode(text2, convert_to_tensor=True)
features['semantic_cosine'] = nn.CosineSimilarity(dim=0)(emb1, emb2).item()</code>
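The cosine similarity used here is simply the normalized dot product of the two embedding vectors. A dependency-free version over plain lists (with made-up toy vectors in place of real 384-dimensional embeddings) behaves the same way:

```python
import math

def cosine_similarity(v1, v2):
    # cos(v1, v2) = (v1 · v2) / (||v1|| * ||v2||), in [-1, 1]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # parallel vectors → ≈1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal vectors → 0.0
```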
Syntactic analysis: TruncatedSVD applies LSA (latent semantic analysis) to the TF-IDF matrix of the two texts.<code class="language-python">tfidf_matrix = self.tfidf.fit_transform([text1, text2])
svd = TruncatedSVD(n_components=1)
lsa = svd.fit_transform(tfidf_matrix)
features['lsa_cosine'] = float(np.dot(lsa[0], lsa[1]))</code>
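Note that with only two documents and n_components=1, each LSA vector collapses to a single number, so the "cosine" here reduces to a signed product. The underlying idea, comparing documents as weighted term vectors, can be sketched without scikit-learn using raw term counts (TF-IDF weighting and the SVD step are omitted in this simplified version):

```python
import math
from collections import Counter

def tf_cosine(text1, text2):
    # Cosine similarity between raw term-frequency vectors of two texts
    tf1 = Counter(text1.lower().split())
    tf2 = Counter(text2.lower().split())
    dot = sum(tf1[w] * tf2[w] for w in tf1.keys() & tf2.keys())
    norm1 = math.sqrt(sum(c * c for c in tf1.values()))
    norm2 = math.sqrt(sum(c * c for c in tf2.values()))
    return dot / (norm1 * norm2)

print(tf_cosine("the cat", "the dog"))  # shares 1 of 2 terms → ≈0.5
```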
Attention mechanism: emb1 attends to emb2, and the mean activation serves as a feature.<code class="language-python">att_output, _ = self.attention(
    emb1.unsqueeze(0).unsqueeze(0),
    emb2.unsqueeze(0).unsqueeze(0),
    emb2.unsqueeze(0).unsqueeze(0)
)
features['attention_score'] = att_output.mean().item()</code>
The extracted features are combined into a single vector and fed into the fully connected network, which outputs a similarity score between 0 and 1.
<code class="language-python">def forward(self, text1, text2):
    features = self._extract_features(text1, text2)
    return self.fc(features).item()</code>
The similarity_coefficient function instantiates the model and computes the similarity between two input texts:
<code class="language-python">def similarity_coefficient(text1, text2):
    model = HybridSimilarity()
    return model(text1, text2)</code>
This returns a float between 0 and 1 representing how similar the two texts are.
By integrating multiple aspects of text comparison, the HybridSimilarity algorithm offers a robust approach to measuring text similarity. Combining lexical, phonetic, semantic, and syntactic analysis gives a more comprehensive and nuanced picture than any single measure, making it suitable for applications such as duplicate detection, text clustering, and information retrieval.
The above is the detailed content of the hybrid similarity algorithm. For more information, see other related articles on the PHP Chinese website (PHP中文网)!