Troubles scaling up the cluster

Running `status` is certainly expensive and can saturate the cluster controller if run frequently (though this is much improved in 6.0), but I don’t think it would lead to any cluster damage.

The missing data report here is possibly misleading. If you’ve added back all of your storage servers and there was no missing data before the problem, then my guess is that the data is all present. However, while the storage servers were slowly being added back, there would have been a period where some data was missing, and you would get this message. Due to a reporting quirk, this status can persist even after all data shows up if there is a lot of data movement queued. Can you let it run for a bit until data movement finishes and see if it becomes happy? Alternatively, I think there are some logs we can check to see if things are in an ok state, but I’ll have to look up the relevant events before I can provide details.
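As a quick way to watch whether the queued data movement has drained, you could poll the machine-readable status and look at the moving-data counters. A minimal sketch is below; the field names are from my memory of the `status json` schema, so double-check them against the output of your FDB version, and the sample JSON is obviously made-up numbers:

```python
import json

# Hypothetical fragment of `status json` output for illustration.
# On a live cluster you would get the real thing with something like:
#   fdbcli --exec "status json"
sample = json.loads("""
{
  "cluster": {
    "data": {
      "state": {"healthy": false, "name": "healing"},
      "moving_data": {"in_flight_bytes": 1048576, "in_queue_bytes": 52428800}
    }
  }
}
""")

def movement_remaining(status):
    """Bytes of data movement still pending (in flight + queued)."""
    md = status["cluster"]["data"].get("moving_data", {})
    return md.get("in_flight_bytes", 0) + md.get("in_queue_bytes", 0)

# Non-zero while rebalancing is still working through the queue;
# once this reaches zero, the missing-data message should clear.
print(movement_remaining(sample))
```

If you poll this in a loop, keep the earlier caveat in mind: `status` itself puts load on the cluster controller, so don’t run it in a tight loop on a busy cluster.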