A large number of "failed to acquire scoreboard" errors appear in the php-fpm log
The php-fpm log contains a large number of "failed to acquire scoreboard" warnings. Child processes hit their maximum request count and are killed, but no new child processes are spawned; in the end only the master process is left.
The log is as follows:
<code>ERROR: [pool ] no free scoreboard slot
WARNING: [pool www] child 31311 said into stderr: "WARNING: failed to acquire proc scoreboard"</code>
After investigation: when this problem first occurred, one request returned a 500 error. The 500 was caused by a query that returned a very large result set from the database. At the same time, the system log showed:
<code>TCP: time wait bucket table overflow </code>
Does anyone know the reason?
<code>WARNING: failed to acquire proc scoreboard</code>
Could this be caused by the large query result set being held entirely in memory for processing, leaving the server short of memory? As for <code>time wait bucket table overflow</code>: a problem in handling requests produced a large number of TCP connections stuck in TIME_WAIT, and their count exceeded the system's configured tcp_max_tw_buckets limit.
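To confirm the TIME_WAIT theory, you can compare the number of sockets in that state against the configured ceiling. A minimal sketch, assuming a Linux host with iproute2 (`ss`) installed:

```shell
# Count sockets currently in TIME_WAIT
ss -ant state time-wait | tail -n +2 | wc -l

# The configured ceiling that "time wait bucket table overflow" refers to
cat /proc/sys/net/ipv4/tcp_max_tw_buckets
```

If the first number is regularly near the second, the overflow message is expected.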
Things to try:
Increase system memory.
Process large result sets in chunks, or stream rows from the database one at a time (cursor mode / unbuffered query) instead of loading everything at once.
Adjust tcp_max_tw_buckets and the related TIME_WAIT settings.
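For the last suggestion, a sketch of how the sysctl could be inspected and raised on Linux; the value 262144 is purely illustrative, pick one appropriate for your traffic:

```shell
# Inspect the current values (Linux, root not required to read)
sysctl net.ipv4.tcp_max_tw_buckets
sysctl net.ipv4.tcp_tw_reuse

# Raise the TIME_WAIT bucket limit at runtime (requires root)
sysctl -w net.ipv4.tcp_max_tw_buckets=262144

# Persist the change across reboots
echo 'net.ipv4.tcp_max_tw_buckets = 262144' >> /etc/sysctl.conf
```

Note that raising the limit only hides the symptom; enabling tcp_tw_reuse or fixing the connection churn addresses the cause.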
The above is purely personal speculation and for reference only, since I have never run into this error myself.
Check whether any script consumes a lot of memory
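A quick way to do that check, assuming a Linux procps build of `ps` (flags may differ on BusyBox/BSD):

```shell
# Show the 10 processes with the largest resident memory
ps aux --sort=-rss | head -n 11

# Narrow it down to php-fpm workers only
ps -C php-fpm -o pid,rss,vsz,cmd --sort=-rss
```

A worker whose RSS keeps growing across requests points at the script (or result set) that is hoarding memory.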