I’m wondering how reliable this metric is in 7.3.43. I was looking at two storage servers yesterday that showed very different CPU consumption (i.e. 75% vs ~5%) but reported fairly similar readOpsPerSec values:
Low CPU:
[
    {
        "begin" : "",
        "bytes" : 294906875,
        "end" : "\u00FF",
        "readBytesPerSec" : 3333.3333333333335,
        "readOpsPerSec" : 4666700
    }
]
High CPU:
[
    {
        "begin" : "",
        "bytes" : 18922463868,
        "end" : "\u00FF\u0002/<snip>",
        "readBytesPerSec" : 414166.66666666669,
        "readOpsPerSec" : 4011000
    }
]
If we look at readBytesPerSec, the difference in CPU seems to make sense: one server is barely pumping any data while the other is pushing roughly 100 times more. But judging by readOpsPerSec alone, the difference doesn't add up.
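To make the comparison concrete, here is a quick sketch (plain Python, with the numbers copied verbatim from the two JSON samples above) of the ratios I'm looking at:

# Quick sanity check on the two samples above (values copied from the JSON).
low_cpu = {"readBytesPerSec": 3333.3333333333335, "readOpsPerSec": 4666700}
high_cpu = {"readBytesPerSec": 414166.66666666669, "readOpsPerSec": 4011000}

# Ratio of the high-CPU server to the low-CPU server for each metric.
bytes_ratio = high_cpu["readBytesPerSec"] / low_cpu["readBytesPerSec"]  # ~124x
ops_ratio = high_cpu["readOpsPerSec"] / low_cpu["readOpsPerSec"]        # ~0.86x

print(f"readBytesPerSec ratio: {bytes_ratio:.0f}x, readOpsPerSec ratio: {ops_ratio:.2f}x")

So the byte rate differs by two orders of magnitude while the op rate is nearly the same on both servers.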
Could this be a bug?