
Compressing Large Data Sets in Redis With Gzip




When it was republished, the post dropped the quoted excerpt and my comments.

A long post analyzing different scenarios of compressing data stored in Redis using Gzip:

A year and a half ago, I was working with a piece of software that used Redis as a buffer to store large sets of text data. We had some bottlenecks there. One of them was related to Redis and the large amount of data we kept in it (large compared to the amount of RAM). Since then, I've wanted to check whether using Gzip would be a big improvement or whether it would just become the next bottleneck (CPU). Unfortunately, I no longer have access to that software, so I decided to create a simple test case just to check this.
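To make the idea being tested concrete, here is a minimal sketch of gzip-compressing values before writing them to Redis and decompressing them on read. It uses Python with the redis-py client purely as an assumption; the key names and connection details are hypothetical and the original software's stack is not described in the post.

```python
import gzip
import redis  # redis-py client; an assumption, not the original software's stack

# Hypothetical connection details; adjust for your environment.
r = redis.Redis(host="localhost", port=6379)

def set_compressed(key, text):
    """Gzip-compress a text value before writing it to Redis."""
    r.set(key, gzip.compress(text.encode("utf-8")))

def get_compressed(key):
    """Read back a value written by set_compressed and decompress it."""
    raw = r.get(key)
    return gzip.decompress(raw).decode("utf-8") if raw is not None else None

# Usage: store a large text blob and read it back.
set_compressed("doc:1", "a large chunk of text " * 1000)
print(len(get_compressed("doc:1")))
```

The trade-off the test probes is exactly the one visible here: smaller values mean less Redis memory and network traffic, at the cost of CPU time spent compressing and decompressing on every access.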

If speed is what matters most, I think algorithms like Snappy and LZO are a better fit. If data density is what matters, then Zopfli is probably the better choice.
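As a rough, standard-library-only illustration of that speed-versus-density trade-off (Snappy, LZO, and Zopfli all need third-party bindings and are not shown), the sketch below compares zlib at compression level 1 against level 9 on a repetitive payload; the payload itself is made up for the example.

```python
import time
import zlib

# Hypothetical repetitive payload standing in for large text data.
payload = ("some repetitive text payload " * 10_000).encode("utf-8")

# Level 1 is fast but produces larger output; level 9 is slow but denser.
for level in (1, 9):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed)} bytes in {elapsed:.4f}s")
```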

Original title and link: Compressing Large Data Sets in Redis With Gzip (NoSQL database©myNoSQL)
