
java - How does a high-concurrency backend handle database updates and inserts?

A question occurred to me today:
under high concurrency, how does the backend carry out updating and persisting the data?

1. Insert into the database directly, then update the cache?
That update would risk blocking on I/O, wouldn't it?

2. Update the cache directly, then update the database through a message queue?
That only defers the I/O blocking; it doesn't avoid it.

3. Update the cache directly and batch-update the database on a schedule?
That solves the I/O problem, but having the data sit only in the cache feels unnerving.

I've never built a truly high-concurrency system in practice. How is this situation handled?

---------- Update ----------

To summarize:

Given that direct inserts and updates risk blocking on database I/O, what does the process of finally persisting the data to the database actually look like?
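
For concreteness, a minimal Java sketch of option 2 (update the cache, let a queue drain into the database); the in-process queue and class names here are stand-ins I made up for a real cache and message queue:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Write-behind sketch: the request only touches the cache, while a
// background consumer drains a queue into the database at its own pace.
public class WriteBehindSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final BlockingQueue<String[]> dbQueue = new LinkedBlockingQueue<>(10_000);

    public void update(String key, String value) throws InterruptedException {
        cache.put(key, value);                 // fast path: caller returns after the cache write
        dbQueue.put(new String[]{key, value}); // blocks when full: I/O is deferred, not avoided
    }

    public void startConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String[] kv = dbQueue.take();
                    persist(kv[0], kv[1]);     // the only place that performs database I/O
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    private void persist(String key, String value) {
        // placeholder for the JDBC INSERT/UPDATE
    }
}
```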

大家讲道理 · 2806 days ago

7 replies

  • 天蓬老师 2017-04-18 10:25:14

    Put an intermediate pool in between: a cache, i.e. Redis; that's the gist of it.
    If the pool backs up, just add processing units. It's like one skinny guy with a tiny cherry mouth eating from a big pot: no matter how big his bowl is, he can't keep up, so you find ten skinny guys with tiny cherry mouths, or one big fat guy with a huge mouth.

    Update:

    If it is truly high concurrency, first determine whether it is sustained or an occasional spike.
    1. For spikes, a two-level pool can still absorb the load. Concretely: application => Redis => database, three tiers (see the Java read-path sketch after this list). A common setup is Redis and the database on one or two machines, the application on one, plus one more machine for dispatch with a reverse proxy or dynamic/static traffic splitting. That gives a two-level pool over application and data: 1*Reverse&HTTP + 1*Redis + 1*DataBase.

    2. For sustained high concurrency, look at your application's topology and use a cluster to spread single-point pressure across logic, application layers, and tiers. Concretely:
        1. Same as option 1 above; its strength is simplicity: 1*Reverse + n*HTTP + n/y*Redis + n/x*DataBase + 1*MainDataBase.
        2. Modularize the application: each server runs application + database, a reverse proxy in front does load balancing, and one back-end server performs full data synchronization. This is blunt but suits applications whose logic layers all carry uniformly high load; the drawback is that capacity sized for the hottest parts is wasted on the cooler ones: 1*Reverse + n*Service + 1*MainDataBase.
        3. Partition by application logic, e.g. users + logic 1 + logic 2 + orders + forum, one server per concern. This split needs careful and thorough load testing, later large-scale expansion is laborious, and it demands strong load-forecasting ability. Implementation is the same as 2, except each middle-tier server carries only a single layer of the application's load. It can fit the application very precisely; the drawback: it is awfully complex. 1*Reverse + N*Service1 + N*Service2 + N*Service3 ... + N*Redis + N*DataBase + 1*MainDataBase.

        Of these approaches, I think you should choose the scale-out strategy that fits your actual situation. There are only trade-offs, no absolute bests. That's all... **This thread has drifted: how did we end up on load balancing?**
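
    A minimal Java sketch of the application => Redis => database read path from option 1, assuming the Jedis client; `loadFromDatabase` is a made-up placeholder for the real JDBC call:

```java
import redis.clients.jedis.Jedis;

// Read-through sketch: Redis absorbs the read load; only cache misses
// reach the database. Single-threaded sketch; a real application would
// use a JedisPool rather than one shared Jedis instance.
public class ThreeTierRead {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getProduct(String id) {
        String cached = jedis.get("product:" + id);
        if (cached != null) {
            return cached;                        // served from the pool (Redis)
        }
        String fromDb = loadFromDatabase(id);     // cache miss: hit the database
        jedis.setex("product:" + id, 60, fromDb); // short TTL bounds staleness
        return fromDb;
    }

    private String loadFromDatabase(String id) {
        return "row-" + id; // placeholder for SELECT ... WHERE id = ?
    }
}
```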

  • 黄舟 2017-04-18 10:25:14

    It simply cannot be avoided for direct write operations. If you always reason from "extreme cases", you will miss the point of the problem.

  • 高洛峰 2017-04-18 10:25:14

    Write to the cache first, then persist to disk. Is the poster worried that the cache server will go down and lose data?
    In a normal production environment, no server, whether application, database, or cache, runs as a single standalone machine; at minimum it is a master-slave pair, so if one machine fails the other takes over. Under normal load the probability of even one machine going down is low, let alone two at once, and Redis can also run as a cluster, so with a sane deployment the chance of this problem is quite small.
    The poster can read up on the CAP theorem; what you describe wanting matches what CAP explains. There is no absolutely perfect solution.

  • 巴扎黑 2017-04-18 10:25:14

    Thanks for the invite, but I don't have much hands-on experience with this problem, so I can only speak theoretically.

    First, a cache is unavoidable. A cache's purpose is to temporarily hold work that cannot be processed immediately, stretching out the time available to handle it. For example, a sudden 10-minute burst of high concurrency creates a backlog; with a cache, those 10 minutes of work can be processed over half an hour. There is an assumption here, of course: after the 10-minute burst, not too much new work keeps arriving.

    Then the question becomes: what if the inflow stays high and simply cannot be handled? I recently learned a word for this: backpressure. The most direct way to deal with backpressure is to discard part of the input. For data you usually can't afford to discard anything, so the only remaining lever is processing efficiency, which is where scaling out, clustering, traffic splitting, and other concurrency techniques come in.
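
    A toy Java illustration of that bluntest response, shedding load with a bounded buffer; the class is made up for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Bounded buffer as backpressure: once the buffer is full, new work
// is rejected instead of piling up without limit.
public class BackpressureSketch {
    private final BlockingQueue<Runnable> buffer = new ArrayBlockingQueue<>(1_000);

    /** Returns false when the buffer is full, i.e. this item is shed. */
    public boolean submit(Runnable task) {
        return buffer.offer(task); // non-blocking offer: full buffer => drop
    }
}
```

    When the data must not be dropped, a blocking put(task) would stall the producer instead; that is backpressure propagating upstream, which is why the answer falls back to scaling out the consumers.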

    The above is my personal understanding, in colloquial terms and not very professional; for reference only.

  • ringa_lee 2017-04-18 10:25:14

    This is a big question, and different scenarios call for different solutions to high concurrency.

    For example, Weibo is high-concurrency and so are financial systems, but the former is not badly hurt even if some information is lost, while the latter has strict requirements on information durability.

    Also, is this high-concurrency reading or high-concurrency writing?

    Is the high concurrency a burst within a certain period, or sustained?

    Without these preconditions stated, how can anyone answer?

  • PHPz 2017-04-18 10:25:14

    This is actually a good question.
    Reads should not be a problem: most mainstream databases use copy-on-write / multi-version mechanisms and already support highly concurrent reads at the database level. The crux is write operations: how to keep the data consistent and make sure the cache is updated when the data is updated; that is what the database's ACID guarantees are for.
    For writes, today's Internet-scale databases relax ACID consistency into eventual consistency rather than traditional strong consistency, so that user requests can be answered quickly. This article is an introduction to CAP, for reference: http://blog.csdn.net/starxu85...
    Your question is essentially the "Double Eleven" flash sale: thousands of people or more trying to snap up the same product at once. If each purchase wrote to the database directly, the frequent writes would inevitably lock tables and block other operations. Fortunately some databases have performance optimizations for this scenario (such as the newly open-sourced AliSQL, based on MySQL and optimized for flash sales): the requests can be queued in the database's cache and applied as one batched update, preserving high concurrency. Other machinery in between, such as high/low watermarks and thread pools, supports this as well.
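
    A toy Java sketch of that "queue the requests, apply one batched update" idea; this is my own illustration, not AliSQL's actual mechanism, and `flush` stands in for the real SQL:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Batching sketch: each flash-sale request is a cheap in-memory increment;
// a background task folds the accumulated count into one database write.
public class StockBatcher {
    private final AtomicInteger pendingSold = new AtomicInteger();

    public StockBatcher() {
        // flush once per second instead of once per request
        Executors.newSingleThreadScheduledExecutor()
                 .scheduleAtFixedRate(this::flush, 1, 1, TimeUnit.SECONDS);
    }

    public void recordSale() {
        pendingSold.incrementAndGet();
    }

    private void flush() {
        int n = pendingSold.getAndSet(0); // atomically take the batch
        if (n > 0) {
            // one statement replaces n statements, e.g.:
            // UPDATE stock SET quantity = quantity - n WHERE product_id = ?
        }
    }
}
```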
    The above is my humble opinion; I am not a professional DBA, and corrections are welcome if the theory is wrong.

  • 伊谢尔伦 2017-04-18 10:25:14

    The poster's problem is common to most applications, and it has little to do with whether the system is high-concurrency. I have thought about similar problems before:
    A brief talk about caching (2)
    First, about the cache itself: in most cases our program should have only a "weak dependence" on the cache. How to understand this "weak dependence"?
    My understanding is that, in most cases, the correctness of our program's data should not be judged by what is in the cache. For example, the "price on a product detail page" is indeed data from the cache, but the total computed when we generate the order must be checked against the database. Again, this is the common case; there are a few cases where correctness is not so strict. For example, in a lottery, the count of consolation prizes can be preloaded into the cache and subsequent business logic can work directly off that cached count, because consolation prizes are usually small merchant-friendly perks, such as 5 yuan off a 100-yuan purchase; even if the cache fails at that point, reloading it has little impact on the merchant.
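
    A small Java sketch of that "weak dependence": the display path may serve the cached price, while the order path re-reads the authoritative value. The class and methods are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Display reads tolerate a stale cache; order creation does not.
public class PriceService {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Detail page: cache first; staleness only affects what is shown.
    public String displayPrice(String productId) {
        String cached = cache.get("price:" + productId);
        return cached != null ? cached : dbPrice(productId);
    }

    // Order total: correctness matters, so bypass the cache entirely.
    public String priceForOrder(String productId) {
        return dbPrice(productId);
    }

    private String dbPrice(String productId) {
        return "100"; // placeholder for SELECT price FROM products WHERE id = ?
    }
}
```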

    Is there a way to guarantee strong consistency between the database and the cache? That means distributed transactions, with both sides implementing the X/Open XA interface, and it requires locking resources for a long time, so I don't recommend it (eventual consistency can still leave the data inconsistent for a window of time, and it would greatly increase the complexity on the Redis side).

    So in the common case, we first make sure the database write is correct, and then synchronously update, invalidate, or asynchronously update the cache.
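
    A minimal Java sketch of that common order of operations (database first, then invalidate the cache), assuming the Jedis client; `writeToDatabase` is a placeholder:

```java
import redis.clients.jedis.Jedis;

// Cache-aside write: make the database correct first, then delete the
// cache entry so the next read repopulates it from the fresh row.
public class CacheAsideWrite {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public void updateProduct(String id, String newValue) {
        writeToDatabase(id, newValue); // 1. authoritative write
        jedis.del("product:" + id);    // 2. invalidate; avoids caching stale data
    }

    private void writeToDatabase(String id, String value) {
        // placeholder for UPDATE products SET ... WHERE id = ?
    }
}
```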
