PHP SDK 2.0.7 persistent connections die after 5 seconds

We are testing out Couchbase, and I was surprised when an initial request took more than 500 ms to succeed. I verified with the command-line client (cbc) that creating a connection really is that expensive (is this normal, by the way? See the trace below). In PHP, the initial request is similarly slow. Subsequent requests issued in quick succession are faster (40-80 ms), but if the persistent connection is not used for 5 seconds, the next query incurs the connection overhead again. Is this expected? Is there a way to change the persistent connection timeout, or to keep the connection alive indefinitely, so we can avoid the connection startup overhead?

```
# time cbc cat -U couchbase://10.20.1.128,10.20.1.115,10.20.1.129/test-data -u test-data -P x -v session1
0ms [I0] {8258} [INFO] (instance - L:374) Version=2.5.1, Changeset=54ea1fa2e7bd5fef34450d09f2677bb53b6e62ea
0ms [I0] {8258} [INFO] (instance - L:375) Effective connection string: couchbase://10.20.1.128,10.20.1.115,10.20.1.129/test-data?username=test-data&console_log_level=2&. Bucket=test-data
0ms [I0] {8258} [INFO] (cccp - L:118) Requesting connection to node 10.20.1.128:11210 for CCCP configuration
0ms [I0] {8258} [INFO] (connection - L:450) <10.20.1.128:11210> (SOCK=0x25f76e0) Starting. Timeout=2000000us
118ms [I0] {8258} [INFO] (connection - L:116) <10.20.1.128:11210> (SOCK=0x25f76e0) Connected
336ms [I0] {8258} [INFO] (lcbio_mgr - L:491) <10.20.1.128:11210> (HE=0x25f70a0) Placing socket back into the pool. I=0x25f75e0,C=0x25f76e0
337ms [I0] {8258} [INFO] (confmon - L:174) Setting new configuration. Received via CCCP
337ms [I0] {8258} [INFO] (connection - L:450) <10.20.1.129:11210> (SOCK=0x26003a0) Starting. Timeout=2500000us
392ms [I0] {8258} [INFO] (connection - L:116) <10.20.1.129:11210> (SOCK=0x26003a0) Connected
session1             CAS=0x14d501a24b080000, Flags=0x2000006. Size=42
{"foo":"bar"}

real	0m0.614s
user	0m0.004s
sys	0m0.003s
```

Hey waynerev,

The PHP SDK 2.0 automatically maintains persistent connections in the background, but it can only do this when PHP itself runs persistently (the PHP process survives beyond a single page request). The trick is that you must be running PHP under FastCGI (with a sufficiently high max_request_count) or mod_php.
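For example (a sketch, assuming PHP-FPM with a pool file such as `www.conf`; the values are illustrative only), raising the per-worker request limit keeps each PHP process, and therefore its persistent Couchbase connection, alive across many requests:

```ini
; www.conf (PHP-FPM pool configuration) -- illustrative values
pm = dynamic
pm.max_children = 10
; Each worker serves this many requests before being recycled; a low value
; would force the SDK to reconnect to Couchbase frequently.
pm.max_requests = 10000
```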

Cheers, Brett

Hi Brett, thanks for the reply. We’re using Apache + mod_php5, and other things like MySQL maintain persistent connection pools without any issues. Is there something else, possibly configuration-related, that I should look into, or some way to enable additional debugging so we can see what is actually happening with the persistent connections to the Couchbase server? Thanks again for the assistance.

Hey waynerev,

You could enable libcouchbase logging to get an idea of what is happening under the hood, though I don’t think it will reveal much about this particular aspect. Are you passing an identical connection string in each PHP call to new CouchbaseCluster? That string is how we map multiple PHP cluster instances onto the underlying persistent connections.
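For reference, libcouchbase’s console logger can be switched on via the `LCB_LOGLEVEL` environment variable, which the PHP SDK inherits (a sketch; with mod_php the variable would need to go into Apache’s environment, e.g. its envvars file, rather than an interactive shell):

```shell
# Raise libcouchbase's console log verbosity (1 = least, 5 = most verbose)
# for whatever process runs the SDK next.
export LCB_LOGLEVEL=5
# php test.php   # hypothetical repro script: run it with logging enabled
```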

Cheers, Brett

Yes - I use the same test code each time and invoke it via HTTP. If more than 5 s elapses between requests, I see the connection overhead again. The script is quite simple:

```php
<?php
$s1 = microtime(true);
$cluster = new CouchbaseCluster('couchbase://10.20.1.128,10.20.1.115,10.20.1.129');
$d1 = microtime(true) - $s1;

$s2 = microtime(true);
$bucket = $cluster->openBucket('test-data', 'x');
$d2 = microtime(true) - $s2;

$s3 = microtime(true);
//var_dump($bucket->upsert('session1', array('foo' => 'bar')));
var_dump($bucket->get('session1'));
$d3 = microtime(true) - $s3;

var_dump($d1); // time to construct the cluster object
var_dump($d2); // time to open the bucket (where connection setup shows up)
var_dump($d3); // time for the get itself
```

I also tried the v1 API :slight_smile: Same result. I originally tested with libcouchbase 2.5.0 and then also tried the most recent 2.5.1. Quickly scanning the module code, I didn’t see any constant or reference to anything that would explain a 5 s limit - is there something in libcouchbase that could be contributing to this behaviour?

Hey waynerev,

This sounds like some kind of network- or firewall-related configuration. We do not see this behaviour during normal use of the client. You may want to check with your network administrator for any configuration (an idle-connection timeout, for example) that could cause it.
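One way to check for a firewall reaping idle connections (a sketch; run it on the web server, it needs root, and the port is the KV port from the trace above) is to watch for unsolicited TCP resets while a connection sits idle:

```shell
# If a firewall or middlebox is dropping idle connections, you will typically
# see an RST on the memcached port roughly 5 s after the last request.
tcpdump -i any -nn 'tcp port 11210 and (tcp[tcpflags] & tcp-rst) != 0'
```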

Cheers, Brett