Say a Weibo post has TID 1, and the users with UIDs 1, 2, 3, 4, 5, 6, 7, 8, and 9 have all liked it. How should this be stored in a Redis cache? There could be hundreds of thousands of posts. If I use the form key->set(value), where the key identifies the post's TID and the value is the set [1,2,3,4,5,6,7,8,9], then there will be as many key-value pairs as there are posts. What are the drawbacks of this approach, and is there a better way?
怪我咯2017-04-22 09:01:23
You can shard the data across multiple Redis Hashes, with each post stored as just one field of a Hash. Use HINCRBY to increment the like count. Split the TID space into buckets so that no single Hash holds more than about 100 fields: according to the official documentation, when a Hash has few fields it uses a compact, linearly scanned encoding, which at that scale is both faster and more memory-efficient than a tree-like structure.
For example, the post with TID 123456 lives in the Hash z:1234, under the field 56. Assuming the newest posts are also the most active, most requests touch only a handful of Hashes, which is very friendly to the CPU cache.
If you also want to track which users liked a post, you can define a custom value format: while the user count is small, embed the whole UID list directly in the Hash field's value; once it grows past a threshold, say 50 users, spill the list out into a separate Set and store that Set's key in the Hash field instead.
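The original code example did not survive; the scheme described above can be sketched as follows, simulated with plain dicts instead of a live Redis connection. The key shapes (z:&lt;bucket&gt;, s:&lt;tid&gt;), the "@" spill marker, and the threshold value are illustrative assumptions, not from the original post.

```python
SPILL_THRESHOLD = 50  # beyond this many likers, move the list into its own Set

hashes = {}  # simulates Redis Hashes:  "z:<tid // 100>" -> {field: value}
sets = {}    # simulates Redis Sets:    "s:<tid>" -> {uid, ...}

def like(tid: int, uid: int) -> None:
    bucket, field = f"z:{tid // 100}", str(tid % 100)
    h = hashes.setdefault(bucket, {})
    value = h.get(field, "")
    if value.startswith("@"):                 # already spilled to a Set
        sets[value[1:]].add(uid)              # SADD s:<tid> <uid>
        return
    uids = set(map(int, value.split(","))) if value else set()
    uids.add(uid)
    if len(uids) > SPILL_THRESHOLD:           # spill: move the list into a Set
        set_key = f"s:{tid}"
        sets[set_key] = uids
        h[field] = "@" + set_key              # the Hash now holds only a pointer
    else:
        h[field] = ",".join(map(str, sorted(uids)))

def likers(tid: int) -> set:
    value = hashes.get(f"z:{tid // 100}", {}).get(str(tid % 100), "")
    if value.startswith("@"):
        return set(sets[value[1:]])
    return set(map(int, value.split(","))) if value else set()
```

Note how the TID 123456 example from the text maps onto this layout: its likers live in hashes["z:1234"]["56"], either inline or behind an "@s:123456" pointer after the spill.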
Since most posts have few likes, the Hash approach saves a lot of global keys (a global key consumes more memory than a Hash field).
In reply to @sell your underwear and go online:
If you sort in place with quicksort, manually sorting 50 UIDs is very fast: at that data size, the cache friendliness of compactly stored data far outweighs whatever Redis's ZSet gains over manual sorting. Once a post's likers grow past the threshold, it automatically switches to a Set or ZSet to keep the algorithmic time complexity bounded. If you are still worried about efficiency, you can write the sorted UID list back into the Hash field's value and reuse it directly as long as the data does not change.
Whether to use a Set or a ZSet depends on the poster's needs: adding a member to a Set is O(1) while adding to a ZSet is O(log N), but a Set has no ordering.
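The semantic difference can be shown with a minimal simulation: a Set keeps membership only, while a ZSet (modeled here as a member-to-score dict) can always hand members back sorted by score. Names and scores below are illustrative.

```python
likers_set = set()   # SADD semantics: membership only, no order
likers_zset = {}     # ZADD semantics: member -> score (e.g. a timestamp)

def sadd(uid):
    likers_set.add(uid)          # O(1) average in Redis

def zadd(uid, score):
    likers_zset[uid] = score     # O(log N) in real Redis (skip list)

def zrange():
    # members ordered by ascending score, like ZRANGE key 0 -1
    return sorted(likers_zset, key=likers_zset.get)
```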
大家讲道理2017-04-22 09:01:23
I don't recommend that the OP use a HASH to store like data, because there is no way to sort it (if sorting is needed, and I think it certainly is).
Currently this is how we handle it.
You can use a ZSET (sorted set) for storage. Practically speaking, the member count of a single ZSET stays within 100,000; in other words, we assume no single post gets more than 100,000 likes (beyond that is essentially impossible).
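The code block here was lost in extraction; a sketch of the ZSET approach, simulated with dicts, might look like the following. One sorted set per post, member = UID, score = like timestamp, so likers come back in time order. The key name t:&lt;tid&gt;:likes and the use of timestamps as scores are assumptions.

```python
import time

zsets = {}  # simulates Redis sorted sets:  key -> {member: score}

def zadd_like(tid, uid, ts=None):
    # ZADD t:<tid>:likes <timestamp> <uid>
    zsets.setdefault(f"t:{tid}:likes", {})[uid] = ts if ts is not None else time.time()

def zrem_like(tid, uid):
    # ZREM t:<tid>:likes <uid>  (user withdraws the like)
    zsets.get(f"t:{tid}:likes", {}).pop(uid, None)

def like_count(tid):
    # ZCARD t:<tid>:likes
    return len(zsets.get(f"t:{tid}:likes", {}))

def recent_likers(tid, n=10):
    # ZREVRANGE t:<tid>:likes 0 n-1  (newest likers first)
    z = zsets.get(f"t:{tid}:likes", {})
    return sorted(z, key=z.get, reverse=True)[:n]
```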
OK, let me post it again!!!
If you need a NoSQL store like Redis to hold the Weibo data itself, you can lay it out like this :)
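The layout example was lost from the page; a hypothetical key scheme for storing a post alongside its like data, consistent with the t:$tid naming used later in this answer, could look like this. The exact field names are illustrative, not from the original.

```python
def weibo_keys(tid):
    """Key layout for one post; each entry maps to one Redis key."""
    return {
        "post":  f"t:{tid}",             # HASH: uid, content, created_at, ...
        "likes": f"t:{tid}:likes",       # ZSET: member = uid, score = timestamp
        "count": f"t:{tid}:like_count",  # STRING counter, INCR on each like
    }

def post_hash(uid, content, created_at):
    # Fields stored in the t:<tid> HASH (via HSET)
    return {"uid": uid, "content": content, "created_at": created_at}
```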
One more PS: Weibo comments can be stored in a similar way; you just need to agree on a naming convention for the Redis keys. For example, c:&lt;comment ID&gt; holds the comment itself, and it is associated with its post through a ZSET:
t:$tid:comments:scores (ZSET: score = timestamp, member = comment ID);
Retrieving the data with a PIPELINE is then much more convenient.
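The retrieval flow can be sketched as a two-step read: first pull the comment IDs from the ZSET, then batch the per-comment lookups (which in real Redis would all be queued on one PIPELINE, a single round trip). Simulated with dicts; the command names in the comments are the real Redis ones, the sample data is illustrative.

```python
zsets = {"t:1:comments:scores": {"c:11": 100, "c:12": 200}}
hashes = {"c:11": {"text": "first!"}, "c:12": {"text": "nice"}}

def fetch_comments(tid, n=20):
    # ZREVRANGE t:<tid>:comments:scores 0 n-1  -> newest comment IDs first
    z = zsets.get(f"t:{tid}:comments:scores", {})
    ids = sorted(z, key=z.get, reverse=True)[:n]
    # In real Redis each HGETALL c:<id> below would be queued on one
    # PIPELINE, so all comment bodies come back in a single round trip.
    return [hashes[c] for c in ids]
```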
Finally: with a NoSQL store, you really must plan your key naming scheme up front.
怪我咯2017-04-22 09:01:23
Do you actually need to store every UID, or do you just assume that is what Sina Weibo does? In most cases people only care about a number; if that is enough, just store a counter: {tid->count}.
If you do have to store the UIDs, I recommend {tid->set(uid)}.
One optimization: set a threshold. For example, once more than 100 people have liked a post, stop adding UIDs to the set and only increment the counter (you need to keep a separate {tid->count} anyway). When a post has over 10,000 likes, nobody goes back and clicks through every single liker one by one.
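The threshold idea above can be sketched like this, again with plain dicts standing in for Redis. The cap of 100 comes from the post; the rest of the shape is an assumption. Note the accepted inaccuracy: once a post passes the cap, duplicate likes from users outside the stored set can no longer be detected.

```python
CAP = 100  # beyond this many likers, keep only the counter

counts = {}    # {tid -> count}, always maintained (INCR in Redis)
uid_sets = {}  # {tid -> set(uid)}, grown only while under the cap

def add_like(tid, uid):
    s = uid_sets.setdefault(tid, set())
    if uid in s:
        return                       # duplicate (only detectable below the cap)
    if len(s) < CAP:
        s.add(uid)                   # SADD, only while under the threshold
    counts[tid] = counts.get(tid, 0) + 1
```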