Thank you @alloc and @panghy for the helpful suggestions!
In this post, I am less concerned about the JVM freeing up references to native pointers; rather, I am trying to get a better understanding of the size of the fast-allocator pool maintained in the C library and any memory it holds onto over a long time (even after the transactions using that memory have been closed).
Please refer to this post by @SteavedHams: fast-alloc. It describes a long-lived memory pool maintained by the fast allocator; transaction arenas borrow memory from this pool and return it to the pool when they are closed.
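To make sure I am describing it correctly, here is roughly the mental model I have of that pool. This is only a simplified sketch of my understanding (a plain free list), not the actual FastAllocator code, and `pool_borrow`/`pool_return` are hypothetical names:

```c
#include <stdlib.h>

/* Simplified mental model only -- not the real FastAllocator implementation. */
typedef struct Block { struct Block* next; } Block;

static Block* free_list = NULL;    /* long-lived pool kept inside the client process */

/* Borrow a block for a transaction arena (assumes block_size >= sizeof(Block)). */
void* pool_borrow(size_t block_size) {
    if (free_list) {               /* reuse a block returned by a previously closed tx */
        Block* b = free_list;
        free_list = b->next;
        return b;
    }
    return malloc(block_size);     /* pool grows; as I understand it, not given back to the OS */
}

/* Called when a transaction arena is destroyed. */
void pool_return(void* p) {
    Block* b = (Block*)p;
    b->next = free_list;
    free_list = b;                 /* memory stays in the pool rather than being freed */
}
```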
That post also suggests that, given the 10 MB transaction size limit and the 5-second transaction duration limit, there should be very little chance of the fast-allocator pool growing large.
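For context, these per-transaction bounds can also be tightened explicitly through the standard C API transaction options. The sketch below uses example values (1 MB, 2 s); it constrains a single transaction's footprint, not the pool itself, which is exactly the part I am unsure about:

```c
#define FDB_API_VERSION 710
#include <foundationdb/fdb_c.h>
#include <stdint.h>

/* Tighten the bounds of one transaction; the 1 MB / 2 s values are only examples.
   Integer options are passed as 8-byte little-endian values. */
fdb_error_t bound_transaction(FDBTransaction* tr) {
    int64_t size_limit = 1 << 20;   /* cap commit size well below the 10 MB maximum */
    int64_t timeout_ms = 2000;      /* client-side timeout, shorter than the 5 s lifetime */

    fdb_error_t err = fdb_transaction_set_option(
        tr, FDB_TR_OPTION_SIZE_LIMIT, (const uint8_t*)&size_limit, sizeof(size_limit));
    if (err) return err;
    return fdb_transaction_set_option(
        tr, FDB_TR_OPTION_TIMEOUT, (const uint8_t*)&timeout_ms, sizeof(timeout_ms));
}
```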
However, I would like to understand this better and confirm the worst-case size of this shared pool in the client process. Specifically:
- Is the max pool size proportional to the number of concurrent write transactions (each tx may need up to 10 MB, and there can be 100s of such concurrent transactions)?
- Does the max pool size depend on the number of concurrent read operations? Each range-read operation can retrieve a large amount of data in a single network call, and if the data from each call is buffered in this pool, the pool would have to grow to accommodate those reads (see the range-read sketch after this list).
- If there is no default cap on the worst-case size of this pool, are there any recommended ways to limit it to a threshold, or at least keep it small?
- How is the lifecycle of this pool managed? How quickly does it shrink back when the memory goes unused?
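Regarding the range-read question above: what I do today to keep each network call's payload small is pass an explicit byte target to `fdb_transaction_get_range`. The sketch below uses example keys and a 64 KB target (`target_bytes` is a soft cap); my question is whether bounding each call like this also bounds what the fast-allocator pool ends up retaining:

```c
#define FDB_API_VERSION 710
#include <foundationdb/fdb_c.h>
#include <string.h>

/* Read a key range while asking for at most ~64 KB per network call.
   The keys and the byte target are example values. */
FDBFuture* bounded_range_read(FDBTransaction* tr) {
    const char* begin = "user/";
    const char* end   = "user0";    /* first key past the "user/" prefix */
    return fdb_transaction_get_range(
        tr,
        FDB_KEYSEL_FIRST_GREATER_OR_EQUAL((const uint8_t*)begin, (int)strlen(begin)),
        FDB_KEYSEL_FIRST_GREATER_OR_EQUAL((const uint8_t*)end, (int)strlen(end)),
        0,                            /* limit: no explicit row limit */
        64 * 1024,                    /* target_bytes: soft cap on bytes per call */
        FDB_STREAMING_MODE_ITERATOR,  /* paged reads */
        1,                            /* iteration: first page */
        0,                            /* snapshot = false */
        0);                           /* reverse = false */
}
```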
It may be that I have misunderstood the behavior of the fast-allocator pool in the C library; if so, please disregard this post.