Reindexing taking a long time

We currently have ~12 million docs on our production cluster, set up as follows:
3 Couchbase Server nodes (c5.4xlarge): 1 data, 1 data/index, 1 data/query
2 Sync Gateway nodes behind a load balancer, both c5.4xlarge

When inserting a decent chunk of data via the Sync Gateway API (~500k docs), indexing seems to take longer and longer. We recently upgraded all our instances from m4.xlarge, but the reindexing time seems to snowball as we put more and more data into the server.
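For context, this is roughly how we push the docs in; a minimal sketch assuming Sync Gateway's _bulk_docs REST endpoint, with placeholder host, database name, and credentials:

import json
import requests

# Placeholder values -- swap in your Sync Gateway host, db name, and user.
SG_URL = "http://sync-gateway.example.com:4984/data"
AUTH = ("sg_user", "sg_password")

def bulk_insert(docs, batch_size=500):
    """Push docs through Sync Gateway's _bulk_docs endpoint in batches."""
    for i in range(0, len(docs), batch_size):
        batch = docs[i:i + batch_size]
        resp = requests.post(
            SG_URL + "/_bulk_docs",
            auth=AUTH,
            headers={"Content-Type": "application/json"},
            data=json.dumps({"docs": batch}),
        )
        resp.raise_for_status()

# Example payload shape: plain JSON docs with an explicit _id each.
docs = [{"_id": "doc::%d" % n, "type": "item", "value": n} for n in range(1000)]
bulk_insert(docs)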

It has been running for close to an hour and has only reached 4%. Both sync_gateway logs repeatedly show lines like the following:

2018-07-10T12:33:01.413Z Timeout waiting for view "role_access" to be ready for bucket "data" - retrying...
2018-07-10T12:33:01.413Z Timeout waiting for view "access" to be ready for bucket "data" - retrying...
2018-07-10T12:39:03.515Z Timeout waiting for view "channels" to be ready for bucket "data" - retrying...
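For what it's worth, we can also watch the view build from the Server side. A rough sketch that polls the admin REST API's tasks endpoint, which reports a progress percentage for indexing and compaction jobs (host and credentials are placeholders):

import time
import requests

# Placeholder admin host/credentials -- adjust for your cluster.
ADMIN = "http://couchbase-server.example.com:8091"
AUTH = ("Administrator", "password")

while True:
    tasks = requests.get(ADMIN + "/pools/default/tasks", auth=AUTH).json()
    for task in tasks:
        # Indexer and view-compaction tasks expose a 0-100 "progress" field.
        if "progress" in task:
            print(task.get("type"), task.get("designDocument"), task["progress"])
    time.sleep(30)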

I'm not quite sure what else we can do at this point to expedite this process. Let me know if you need any other information.

Edit:

I went over to the logs in the Server UI and see repeated entries like the following:

Compactor for view data/_design/sync_gateway_2.0 (pid [{type,view},
  {name,<<"data/_design/sync_gateway_2.0">>},
  {important,false},
  {fa,{#Fun<compaction_new_daemon.25.86110551>,
       [<<"data">>,
        <<"_design/sync_gateway_2.0">>,
        {config,{30,undefined},{30,undefined},undefined,false,false,
                {daemon_config,30,131072,20971520}},
        false,
        {[{type,bucket}]}]}}]) terminated unexpectedly (ignoring this):
{not_enough_space,<<"data">>,<<"_design/sync_gateway_2.0">>,21399369762,7388863488}
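If I'm reading the error right, the last two numbers are bytes: the compactor needed ~21.4 GB (19.9 GiB) but only ~7.4 GB (6.9 GiB) was free, so compaction keeps dying. A quick sketch for checking the headroom on the data volume against that figure (the data path below is the Linux default; yours may differ):

import shutil

# Default Couchbase data path on Linux -- adjust if your volume differs.
DATA_PATH = "/opt/couchbase/var/lib/couchbase/data"

needed = 21399369762  # bytes the compactor reported needing (~19.9 GiB)
free = shutil.disk_usage(DATA_PATH).free

print("free: %.1f GiB, needed: %.1f GiB" % (free / 2**30, needed / 2**30))
if free < needed:
    print("Not enough headroom for view compaction -- grow the volume further.")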

So I increased the size of the volume used by the indexing instance, but it's still going really slowly. Any other ideas?

We have the same problem on our production bucket. Any news on this?