P粉797855790 2023-09-01 15:45:09
View this answer. Apparently this problem can occur if you set the session's maximum lifetime to a value larger than memcached's expiration limit (memcached treats any expiration value over 30 days as a Unix timestamp). In that post, the OP solved the problem by adjusting the following configuration variables, which you can try:
define('SESSION_TIME_OUT', x); // seconds; keep this below memcached's 30-day limit
ini_set('session.gc_maxlifetime', SESSION_TIME_OUT);
ini_set('session.cache_expire', SESSION_TIME_OUT); // note: cache_expire is in minutes, not seconds
session_start();
Another option is to remove memcached and use a memory-resident sqlite3 database as the session store instead; I don't think the production performance will differ much between the two.
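A minimal sketch of that alternative: a custom session handler backed by SQLite via PDO. This assumes the pdo_sqlite extension is available; the class name, table name, and file path are illustrative, not from the original answer.

```php
<?php
// Sketch of a SQLite-backed session store (assumes pdo_sqlite is installed).
class SqliteSessionHandler implements SessionHandlerInterface
{
    private PDO $db;

    public function __construct(string $path)
    {
        $this->db = new PDO('sqlite:' . $path);
        $this->db->exec('CREATE TABLE IF NOT EXISTS sessions (
            id TEXT PRIMARY KEY, data TEXT, updated_at INTEGER)');
    }

    public function open(string $savePath, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        $stmt = $this->db->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : ''; // empty string means "no session yet"
    }

    public function write(string $id, string $data): bool
    {
        $stmt = $this->db->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
        return $stmt->execute([$id, $data, time()]);
    }

    public function destroy(string $id): bool
    {
        $stmt = $this->db->prepare('DELETE FROM sessions WHERE id = ?');
        return $stmt->execute([$id]);
    }

    public function gc(int $maxLifetime): int|false
    {
        // Drop sessions idle longer than session.gc_maxlifetime.
        $stmt = $this->db->prepare('DELETE FROM sessions WHERE updated_at < ?');
        $stmt->execute([time() - $maxLifetime]);
        return $stmt->rowCount();
    }
}

session_set_save_handler(new SqliteSessionHandler('/tmp/sessions.sqlite'), true);
session_start();
```

For a truly memory-resident store you could pass ':memory:' as the path, but note that such a database lives only as long as the PHP process, so it only makes sense under a long-running runtime, not classic per-request PHP-FPM workers.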
P粉762730205 2023-09-01 11:17:20
If you are using an AWS ElastiCache Memcached cluster, check the endpoint you used in $config['sess_save_path']. You can use either the configuration endpoint (which contains .cfg.) or an individual node endpoint (which contains .0001., .0002., etc.). If you use the configuration endpoint, make sure Auto Discovery is enabled; this requires installing an extra module on the server, the ElastiCache Cluster Client for PHP. If it is not enabled, your nodes will not resolve correctly, causing issues like this.
It turned out that this was the case for me. I tried logging messages on session start, regenerate, and destroy: with the file driver the regenerate happens, while with memcached nothing is called except session_start(). After some investigation, I decided to recheck the host and stumbled upon this guide from AWS. It turns out that when the issue started, a second node had been added to our Memcached cluster, but we had been using the configuration endpoint without Auto Discovery set up; I'm not sure how that setup worked at all before. So I changed $config['sess_save_path'] to the endpoint of one of the nodes and the problem went away. This workaround should hold until I install and set up the required modules, and as long as that node is unchanged.
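For reference, the change amounts to something like the following CodeIgniter 3 config fragment; the hostnames below are illustrative placeholders, not real endpoints.

```php
// Before: configuration endpoint (only works when Auto Discovery is enabled
// via the ElastiCache Cluster Client for PHP):
// $config['sess_save_path'] = 'mycluster.abc123.cfg.use1.cache.amazonaws.com:11211';

// Workaround: point directly at a single node endpoint instead:
$config['sess_driver']    = 'memcached';
$config['sess_save_path'] = 'mycluster.abc123.0001.use1.cache.amazonaws.com:11211';
```

The trade-off is that you lose the cluster's second node for sessions and must update the config by hand if that node is ever replaced.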