MapReduce - emitted result

I am fetching records from a bucket via the Map function, and the results contain different types of records (Entity1, Entity2, Entity3 and so on). For example:

function (doc, meta) {
  if (meta.id.split('::')[0] == 'Entity1' && doc.status == 'NEW-DOC') {
    emit('Entity1', {'id': meta.id, 'status': doc.status, 'type': doc.type});
  }
  if (meta.id.split('::')[0] == 'Entity2' && doc.status == 'OLD-DOC') {
    emit('Entity2', {'id': meta.id, 'status': doc.status, 'type': doc.type});
  }
  if (meta.id.split('::')[0] == 'Entity3' && doc.status == 'DOC1') {
    emit('Entity3', {'id': meta.id, 'status': doc.status, 'type': doc.type});
  }
}

The meta ID looks like Entity1::10101, Entity1::10102, Entity2::10101 and so on, so I split the ID and use the zeroth index as the key, since it represents the entity. There could be any number of entities in the same bucket.
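For reference, the key extraction is just a string split on the `::` separator. A minimal standalone sketch (plain Node.js, outside Couchbase; the helper name is made up for illustration):

```javascript
// Extract the entity prefix from a document ID of the form
// "<Entity>::<number>", as the map function above does.
function entityOf(id) {
  return id.split('::')[0];
}

console.log(entityOf('Entity1::10101')); // → 'Entity1'
console.log(entityOf('Entity2::10101')); // → 'Entity2'
```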

Reduce part: I am getting the emitted results in the values array as an array of arrays. When using the built-in _count function I get the count of records as 1000, but values.length gives 10 (it should have been 1000). I know this happens because Couchbase rereduces the array into multiple subarrays, but I need the entire array to iterate over. When iterating up to values.length, my loop ends at i = 10. How can I iterate over the entire array at once?

@dipsrv, you can use something like this function as your custom reduce:

function (key, values, rereduce) {
  if (!rereduce) {
    return values;
  } else {
    var internArray = [];
    for (var i = 0; i < values.length; i++) {
      internArray = internArray.concat(values[i]);
    }
    return internArray;
  }
}
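To see why the concatenation matters, here is a standalone Node.js simulation of how the view engine might call the reduce twice: first on leaf-level values, then with rereduce set on the partial results. The batch grouping is hypothetical; Couchbase decides the actual b-tree partitioning internally.

```javascript
// Plain-Node simulation of a custom reduce with rereduce handling.
// On the leaf pass the values are returned as-is; on the rereduce
// pass each element of `values` is itself an array, so the partial
// results are flattened back into a single array.
function reduce(key, values, rereduce) {
  if (!rereduce) {
    return values;                                 // leaf level: pass through
  }
  var internArray = [];
  for (var i = 0; i < values.length; i++) {
    internArray = internArray.concat(values[i]);   // flatten partial results
  }
  return internArray;
}

// Hypothetical leaf batches, as the b-tree might group emitted rows.
var batch1 = reduce(null, [{id: 'Entity1::1'}, {id: 'Entity1::2'}], false);
var batch2 = reduce(null, [{id: 'Entity1::3'}], false);

// Rereduce combines the partial results into one flat array.
var all = reduce(null, [batch1, batch2], true);
console.log(all.length); // → 3, the total row count, not the batch count
```

This mirrors the problem in the question: without the flattening step, `values.length` at the rereduce level reports the number of subarrays (here 2), not the number of emitted rows.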

This will create a new array, which is stored in the internal nodes of the b-tree. You can read more about custom reduce functions in the following article:
https://developer.couchbase.com/documentation/server/3.x/developer/dev-guide-3.0/reduce-rereduce.html

NOTE: There is a 64 KB limit on the values returned by a reduce function. This is there to prevent indexes from taking too long to build or growing too large.

Thanks,
Ankit