How to use haystack with Django in Python: an example of a full-text search framework
haystack: a full-text search framework for Django
whoosh: a full-text search engine written in pure Python
jieba: a free Chinese word segmentation package
First, install these three packages:
pip install django-haystack
pip install whoosh
pip install jieba
1. Modify the settings.py file and register the haystack application in INSTALLED_APPS.
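Registering the app might look like the following sketch; the Django contrib apps and the 'blog' app name are assumptions based on a typical project layout:

```python
# settings.py
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'haystack',  # register django-haystack
    'blog',      # the application whose models will be indexed
]
```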
2. Configure the search engine in the settings.py file:

HAYSTACK_CONNECTIONS = {
    'default': {
        # Use the whoosh engine
        'ENGINE': 'haystack.backends.whoosh_cn_backend.WhooshEngine',
        # Path of the index files
        'PATH': os.path.join(BASE_DIR, 'whoosh_index'),
    }
}

# Automatically regenerate the index when data is added, modified, or deleted
HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'
3. Create the directory search/indexes/blog/ under the templates directory (the last directory is named after the blog application), and in it create a file post_text.txt (named after the Post model, lowercased).
The file lists the model attributes to index:

{{ object.title }}
{{ object.text }}
{{ object.keywords }}
4. Create a search_indexes.py file in the application that needs to be searched:

from haystack import indexes
from .models import Post

# Build an index over certain fields of a model class
class GoodsInfoIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)

    def get_model(self):
        # The model class to search
        return Post

    def index_queryset(self, using=None):
        return self.get_model().objects.all()

5. Create a ChineseAnalyzer.py file containing the Chinese tokenizer (the relative import in step 6 implies it lives alongside whoosh_backend.py in the haystack backends directory):
import jieba
from whoosh.analysis import Tokenizer, Token

class ChineseTokenizer(Tokenizer):
    def __call__(self, value, positions=False, chars=False,
                 keeporiginal=False, removestops=True,
                 start_pos=0, start_char=0, mode='', **kwargs):
        t = Token(positions, chars, removestops=removestops, mode=mode, **kwargs)
        # Segment the text with jieba (full mode)
        seglist = jieba.cut(value, cut_all=True)
        for w in seglist:
            t.original = t.text = w
            t.boost = 1.0
            if positions:
                t.pos = start_pos + value.find(w)
            if chars:
                t.startchar = start_char + value.find(w)
                t.endchar = start_char + value.find(w) + len(w)
            yield t

def ChineseAnalyzer():
    return ChineseTokenizer()

6. Copy the whoosh_backend.py file and rename the copy whoosh_cn_backend.py. In the copied file, import the Chinese word segmentation module:

from .ChineseAnalyzer import ChineseAnalyzer

Then switch to the Chinese analyzer: find analyzer=StemmingAnalyzer() and change it to analyzer=ChineseAnalyzer().

7. The last step is to build the initial index data:

python manage.py rebuild_index

8. Create a search template: create a search.html template under templates/search/. Search results are paginated, and the context passed by the view to the template includes:

query: the search keyword
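To see what the tokenizer's offset arithmetic produces, here is a dependency-free sketch of the same computation. The `char_offsets` helper and the hard-coded segment list (standing in for jieba's output) are hypothetical, for illustration only; note that, like the tokenizer above, it uses str.find and therefore reports the first occurrence of a repeated word:

```python
def char_offsets(value, segments, start_char=0):
    """Compute (word, startchar, endchar) triples the way the
    ChineseTokenizer does: locate each segment with str.find and
    add its length to get the end offset."""
    out = []
    for w in segments:
        pos = value.find(w)  # first occurrence, as in the tokenizer
        out.append((w, start_char + pos, start_char + pos + len(w)))
    return out

# Stand-in for a segmentation result
print(char_offsets("django search", ["django", "search"]))
# → [('django', 0, 6), ('search', 7, 13)]
```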
Subclass SearchView to add extra context variables:

from haystack.generic_views import SearchView

class BlogSearchView(SearchView):
    def get_context_data(self, *args, **kwargs):
        context = super().get_context_data(*args, **kwargs)
        context['iscart'] = 1
        context['qwjs'] = 2
        return context

Add this url to the application's urls file, exposing the class as a view with as_view():

url('^search/$', views.BlogSearchView.as_view())
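A minimal search.html might look like the following sketch. The exact context variable names depend on the haystack version; with the class-based SearchView, pagination is assumed to follow Django's ListView convention (`page_obj`), and `query` holds the search keyword as the article states:

```html
<!-- templates/search/search.html : a minimal sketch -->
<form method="get" action=".">
  <input type="text" name="q" value="{{ query }}">
  <input type="submit" value="Search">
</form>

{% for result in page_obj.object_list %}
  <p>{{ result.object.title }}</p>
{% empty %}
  <p>No results found for "{{ query }}".</p>
{% endfor %}

{% if page_obj.has_previous %}
  <a href="?q={{ query }}&page={{ page_obj.previous_page_number }}">Previous</a>
{% endif %}
{% if page_obj.has_next %}
  <a href="?q={{ query }}&page={{ page_obj.next_page_number }}">Next</a>
{% endif %}
```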
The above is the detailed content of how to use haystack with Django in Python as a full-text search framework.