
Detailed example of word vector embedding

Jun 21, 2017, 4:11 PM

Word vector embedding requires efficient processing of a large-scale text corpus; word2vec is the standard approach. The simplest alternative is one-hot encoding: each word is fed into the learning system as a vector whose length equals the vocabulary size, with a 1 at the word's position and 0 everywhere else. These vectors are extremely high-dimensional and capture nothing about the semantic relationships between words. Co-occurrence representations address this: traverse a large corpus, count the words that appear within a fixed distance of each word, and represent each word by its normalized counts of nearby words. Words used in similar contexts then get similar representations, matching the intuition that they have similar meanings. PCA or a similar method can reduce the dimensionality of the co-occurrence vectors to obtain a denser representation. This works well, but it requires keeping the entire co-occurrence matrix, whose width and height both equal the vocabulary size. In 2013, Tomas Mikolov and colleagues proposed computing word representations from context: "Efficient estimation of word representations in vector space" (arXiv preprint arXiv:1301.3781, 2013). Their skip-gram model starts from random representations and uses a simple classifier to predict context words from the current word; the prediction error is propagated to both the classifier weights and the word representations, and both are adjusted to reduce it. Trained over a large corpus, the resulting representation vectors approximate compressed co-occurrence vectors.
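For intuition, here is a small, self-contained sketch of the two baseline representations discussed above, one-hot vectors and normalized co-occurrence counts, on a toy corpus. It is purely illustrative and not part of the article's pipeline; the toy corpus and the window size of 2 are arbitrary choices.

import numpy as np

corpus = ['the cat sat on the mat', 'the dog sat on the rug']
vocabulary = sorted({word for line in corpus for word in line.split()})
index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word):
    # A vocabulary-sized vector with a single 1 at the word's index.
    vector = np.zeros(len(vocabulary))
    vector[index[word]] = 1.0
    return vector

# Count the words appearing within a window of +/- 2 around each word.
cooccurrence = np.zeros((len(vocabulary), len(vocabulary)))
for line in corpus:
    words = line.split()
    for i, current in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                cooccurrence[index[current], index[words[j]]] += 1

# Normalize each row so that words used in similar contexts get similar vectors.
rows = cooccurrence / cooccurrence.sum(axis=1, keepdims=True)
print(one_hot('cat'))
print(rows[index['cat']])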

The dataset is an English Wikipedia dump file. The full dump contains the complete revision history of all pages; the version holding only the current page revisions is about 100 GB.

Download the dump file and extract the words from its pages. Count how often each word occurs and build a vocabulary of the most common words, then encode the extracted pages using that vocabulary. The file is read line by line and the results are written to disk immediately, and checkpoints are saved between the steps so that a program crash does not force starting over.

__iter__ traverses the pages as lists of word indices. encode returns the vocabulary index of a string word, and decode returns the string word for a vocabulary index. _read_pages extracts the words from a Wikipedia dump file (compressed XML) and saves them to a pages file, one page per line with words separated by spaces. The bz2 module's open function reads the dump and compresses the intermediate result. A regular expression captures any run of consecutive letters as well as individual special characters. _build_vocabulary counts the words in the pages file and writes the most frequent ones to a vocabulary file; one-hot encoding requires such a vocabulary, and the pages are encoded with vocabulary indices. Spelling mistakes and extremely rare words are dropped: the vocabulary holds only the vocabulary_size - 1 most common words plus an unknown-word token (written <unk> in the code below) that stands in for every word outside the vocabulary, so those words get no word vectors of their own.

Training samples are formed dynamically so that large amounts of data can be organized without the classifier holding them all in memory. The skip-gram model predicts the context words of the current word: traverse the text, take the current word as data and the surrounding words as targets, and create a training sample from each pair. With a context size of R, each word yields 2R samples, one for each of the R words to its left and to its right. Nearby context matters most semantically, so to create fewer training samples for distant context words, the context size is drawn at random from the range [1, D = 10] for every word. Training pairs are formed according to the skip-gram model, and a batching helper turns the resulting numerical stream into Numpy array batches; both helpers are sketched below.
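The training script later imports skipgrams and batched from separate modules that the article does not show. A minimal sketch consistent with the description above (random per-word context size, the current word paired with each surrounding word, fixed-size Numpy batches) could look like this; the exact implementations are assumptions rather than the article's original code.

import random
import numpy as np

def skipgrams(pages, max_context):
    # Form skip-gram training pairs (current word, nearby word).
    for words in pages:
        for index, current in enumerate(words):
            context = random.randint(1, max_context)
            for target in words[max(0, index - context): index]:
                yield current, target
            for target in words[index + 1: index + 1 + context]:
                yield current, target

def batched(iterator, batch_size):
    # Group the stream of pairs into fixed-size Numpy batches.
    while True:
        data = np.zeros(batch_size, dtype=np.int32)
        target = np.zeros(batch_size, dtype=np.int32)
        try:
            for index in range(batch_size):
                data[index], target[index] = next(iterator)
        except StopIteration:
            return
        yield data, target

Because skipgrams yields a continuous stream over the whole corpus, the training loop below simply stops after a fixed number of batches.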

Initially, words are represented by random vectors. The classifier predicts context words from this intermediate representation; the error is propagated back to fine-tune both the classifier weights and the input word representations. The model is optimized with MomentumOptimizer, which is not particularly sophisticated but is efficient.

The classifier is the core of the model. Noise-contrastive estimation (NCE) loss performs well here: rather than modelling a full softmax classifier over the vocabulary, tf.nn.nce_loss draws new random vectors as negative (contrastive) samples and thereby approximates the softmax classifier.

When training ends, the final word vectors are written to a file. Training on a subset of the Wikipedia corpus takes about 5 hours on an ordinary CPU and yields the embeddings as a NumPy array; for the complete corpus, point the downloader at a full dump and expect correspondingly longer training. The AttrDict class behaves like a Python dict whose keys can also be accessed as attributes; a sketch of it and the other helpers is given below.
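The three code listings below import download, lazy_property, and AttrDict from a helpers module that the article does not include. A minimal sketch of what those helpers might look like follows; the implementation details are assumptions rather than the article's original code.

import functools
import os
import urllib.request

def download(url, cache_dir):
    # Download url into cache_dir (unless already cached) and return the local path.
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, url.split('/')[-1])
    if not os.path.isfile(path):
        urllib.request.urlretrieve(url, path)
    return path

class AttrDict(dict):
    # A dict whose keys can also be read and written as attributes.
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)
    def __setattr__(self, key, value):
        self[key] = value

def lazy_property(function):
    # Compute the property once on first access and cache the result on the instance.
    attribute = '_lazy_' + function.__name__
    @property
    @functools.wraps(function)
    def wrapper(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)
    return wrapper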

import bz2
import collections
import os
import re
from lxml import etree
from helpers import download

class Wikipedia:
    TOKEN_REGEX = re.compile(r'[A-Za-z]+|[!?.:,()]')

    def __init__(self, url, cache_dir, vocabulary_size=10000):
        self._cache_dir = os.path.expanduser(cache_dir)
        self._pages_path = os.path.join(self._cache_dir, 'pages.bz2')
        self._vocabulary_path = os.path.join(self._cache_dir, 'vocabulary.bz2')
        if not os.path.isfile(self._pages_path):
            print('Read pages')
            self._read_pages(url)
        if not os.path.isfile(self._vocabulary_path):
            print('Build vocabulary')
            self._build_vocabulary(vocabulary_size)
        with bz2.open(self._vocabulary_path, 'rt') as vocabulary:
            print('Read vocabulary')
            self._vocabulary = [x.strip() for x in vocabulary]
        self._indices = {x: i for i, x in enumerate(self._vocabulary)}

    def __iter__(self):
        with bz2.open(self._pages_path, 'rt') as pages:
            for page in pages:
                words = page.strip().split()
                words = [self.encode(x) for x in words]
                yield words

    @property
    def vocabulary_size(self):
        return len(self._vocabulary)

    def encode(self, word):
        # Words outside the vocabulary map to index 0, the unknown-word token.
        return self._indices.get(word, 0)

    def decode(self, index):
        return self._vocabulary[index]

    def _read_pages(self, url):
        wikipedia_path = download(url, self._cache_dir)
        with bz2.open(wikipedia_path) as wikipedia, \
                bz2.open(self._pages_path, 'wt') as pages:
            for _, element in etree.iterparse(wikipedia, tag='{*}page'):
                if element.find('./{*}redirect') is not None:
                    continue
                page = element.findtext('./{*}revision/{*}text')
                words = self._tokenize(page)
                pages.write(' '.join(words) + '\n')
                element.clear()

    def _build_vocabulary(self, vocabulary_size):
        counter = collections.Counter()
        with bz2.open(self._pages_path, 'rt') as pages:
            for page in pages:
                words = page.strip().split()
                counter.update(words)
        # Reserve index 0 for the unknown-word token (token name '<unk>' assumed).
        common = ['<unk>'] + [x[0] for x in counter.most_common(vocabulary_size - 1)]
        with bz2.open(self._vocabulary_path, 'wt') as vocabulary:
            for word in common:
                vocabulary.write(word + '\n')

    @classmethod
    def _tokenize(cls, page):
        words = cls.TOKEN_REGEX.findall(page)
        words = [x.lower() for x in words]
        return words

import tensorflow as tf
import numpy as np
from helpers import lazy_property

class EmbeddingModel:
    def __init__(self, data, target, params):
        self.data = data
        self.target = target
        self.params = params
        # Touch the lazy properties so the full graph is built at construction time.
        self.embeddings
        self.cost
        self.optimize

    @lazy_property
    def embeddings(self):
        initial = tf.random_uniform(
            [self.params.vocabulary_size, self.params.embedding_size],
            -1.0, 1.0)
        return tf.Variable(initial)

    @lazy_property
    def optimize(self):
        optimizer = tf.train.MomentumOptimizer(
            self.params.learning_rate, self.params.momentum)
        return optimizer.minimize(self.cost)

    @lazy_property
    def cost(self):
        # Look up the embedding of each input word and score it against the
        # target word with noise-contrastive estimation.
        embedded = tf.nn.embedding_lookup(self.embeddings, self.data)
        weight = tf.Variable(tf.truncated_normal(
            [self.params.vocabulary_size, self.params.embedding_size],
            stddev=1.0 / self.params.embedding_size ** 0.5))
        bias = tf.Variable(tf.zeros([self.params.vocabulary_size]))
        target = tf.expand_dims(self.target, 1)
        return tf.reduce_mean(tf.nn.nce_loss(
            weight, bias, embedded, target,
            self.params.contrastive_examples,
            self.params.vocabulary_size))

import collections
import tensorflow as tf
import numpy as np
from batched import batched
from EmbeddingModel import EmbeddingModel
from skipgrams import skipgrams
from Wikipedia import Wikipedia
from helpers import AttrDict

WIKI_DOWNLOAD_DIR = './wikipedia'
params = AttrDict(
    vocabulary_size=10000,
    max_context=10,
    embedding_size=200,
    contrastive_examples=100,
    learning_rate=0.5,
    momentum=0.5,
    batch_size=1000,
)

data = tf.placeholder(tf.int32, [None])
target = tf.placeholder(tf.int32, [None])
model = EmbeddingModel(data, target, params)

corpus = Wikipedia(
    'https://dumps.wikimedia.org/enwiki/20160501/'
    'enwiki-20160501-pages-meta-current1.xml-p000000010p000030303.bz2',
    WIKI_DOWNLOAD_DIR,
    params.vocabulary_size)
examples = skipgrams(corpus, params.max_context)
batches = batched(examples, params.batch_size)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
average = collections.deque(maxlen=100)
for index, batch in enumerate(batches):
    feed_dict = {data: batch[0], target: batch[1]}
    cost, _ = sess.run([model.cost, model.optimize], feed_dict)
    average.append(cost)
    print('{}: {:5.1f}'.format(index + 1, sum(average) / len(average)))
    if index > 100000:
        break

embeddings = sess.run(model.embeddings)
np.save(WIKI_DOWNLOAD_DIR + '/embeddings.npy', embeddings)
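Once embeddings.npy has been saved, a quick sanity check (not part of the original article) is to look up nearest neighbors by cosine similarity. This sketch assumes it runs after the script above, reusing the corpus object and WIKI_DOWNLOAD_DIR.

embeddings = np.load(WIKI_DOWNLOAD_DIR + '/embeddings.npy')
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

def nearest(word, count=5):
    # Return the words whose embeddings are closest to the given word.
    similarities = normed @ normed[corpus.encode(word)]
    best = np.argsort(-similarities)[1:count + 1]  # skip the word itself
    return [corpus.decode(i) for i in best]

print(nearest('king'))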
