Pagination within document

I have a document structure like this:

{
  "startTime": 123,
  "endTime": 932345235,
  "type": "results",
  "validationResults": [
    {
      "name": "xyz",
      "ruleViolated": 1,
      "details": [
        {
          "timeStamp": 132345235,
          "place": "abc",
          "acknowledged": false
        },
        {
          "timeStamp": 7658,
          "place": "bcd",
          "acknowledged": true
        }
      ]
    },

    {
      "name": "xyz",
      "ruleViolated": 2,
      "details": [
        {
          "timeStamp": 132345235,
          "place": "abc",
          "acknowledged": false
        },
        {
          "timeStamp": 7658,
          "place": "bcd",
          "acknowledged": true
        }
      ]
    },

    {
      "name": "random",
      "ruleViolated": 1,
      "details": [
        {
          "timeStamp": 9679,
          "place": "abc",
          "acknowledged": false
        },
        {
          "timeStamp": 12412,
          "place": "bcd",
          "acknowledged": true
        }
      ]
    }

  ]
}

Here startTime and endTime are the range for which the check was made while creating the doc.

I need to query results within a time frame such that I get a list of unique rules, with their details from multiple docs in that time range combined into a single array.

I used the following query to achieve it.

SELECT b.name, ARRAY_AGG(c) AS details
FROM bucketName AS a
UNNEST a.validationResults AS b
UNNEST b.details AS c
WHERE a.type="results" AND ((a.startTime <= 1462042800000 AND a.endTime >= 1462042800000)
OR (a.startTime >= 1462042800000 AND a.startTime <= 1462043025018))
AND (c.timeStamp BETWEEN 1462042800000 AND 1462043025018)
GROUP BY b.name;

But when the time range is large (around 1 week), the aggregated result grows large, so I need to get paginated results for each unique tag in the result.

Is there a way to achieve it?

SELECT d.name, d.ndetails,
       (SELECT RAW c FROM d.details AS c ORDER BY c.timeStamp DESC OFFSET 0 LIMIT 100) AS details
FROM (SELECT b.name,
             COUNT(1) AS ndetails,
             ARRAY_AGG(c) AS details
      FROM bucketName AS a
      UNNEST a.validationResults AS b
      UNNEST b.details AS c
      WHERE a.type = "results" AND ((a.startTime <= 1462042800000 AND a.endTime >= 1462042800000)
                                 OR (a.startTime >= 1462042800000 AND a.startTime <= 1462043025018))
            AND (c.timeStamp BETWEEN 1462042800000 AND 1462043025018)
      GROUP BY b.name) AS d;

If ndetails > offset + limit, there are more entries, so issue a new query for that unique tag to fetch the next page, as shown below.
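
For example, the next page for a single tag can be fetched with a query scoped to that tag. This is only a sketch: the name "xyz", the page size of 100, and the OFFSET of 100 are placeholder values, and the time-range literals are reused from the query above.

SELECT RAW c
FROM bucketName AS a
UNNEST a.validationResults AS b
UNNEST b.details AS c
WHERE a.type = "results"
      AND ((a.startTime <= 1462042800000 AND a.endTime >= 1462042800000)
        OR (a.startTime >= 1462042800000 AND a.startTime <= 1462043025018))
      AND (c.timeStamp BETWEEN 1462042800000 AND 1462043025018)
      AND b.name = "xyz"          /* the tag whose next page is needed */
ORDER BY c.timeStamp DESC
OFFSET 100 LIMIT 100;             /* second page of 100 details */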

Thanks, this works exactly as needed. But do I need an additional index to make the query faster? Currently, I have the following index on the bucket:
CREATE INDEX idx_dqt ON bucketName(startTime, endTime) WHERE (type = "results")

The query is complex, and the index you have is the right one.
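
If you want to verify, you can prefix the query with EXPLAIN (shown here on the simpler, non-paginated variant) and check that idx_dqt appears in the plan output:

EXPLAIN SELECT b.name, ARRAY_AGG(c) AS details
FROM bucketName AS a
UNNEST a.validationResults AS b
UNNEST b.details AS c
WHERE a.type = "results" AND ((a.startTime <= 1462042800000 AND a.endTime >= 1462042800000)
                           OR (a.startTime >= 1462042800000 AND a.startTime <= 1462043025018))
      AND (c.timeStamp BETWEEN 1462042800000 AND 1462043025018)
GROUP BY b.name;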