Hi, I'm a newbie with RocksDB.
I tried to run some tests on RocksDB's memory usage
by enabling/disabling the block cache, changing the block cache size, the write buffer number, etc.,
but the server's memory usage did not change compared to the default settings.
It only uses about 2.5 GiB of the 8 GiB of memory and seems to fall back to disk beyond that. I want to control memory usage freely.
Please check my code in case I missed anything.
Thanks in advance.
import org.rocksdb.*
import org.rocksdb.util.SizeUnit

RocksDB.loadLibrary()

val cacheSize = cacheSizeMb * SizeUnit.MB
val cache = LRUCache(cacheSize)

val blockBasedTableConfig = BlockBasedTableConfig().apply {
    if (enableBlockCache) {
        // setBlockSize(256 * SizeUnit.KB)
        // setBlockCacheSize(cacheSize)  // deprecated; pass a Cache object instead
        setBlockCache(cache)
    }
    setNoBlockCache(!enableBlockCache)
}

val options = Options().apply {
    setCreateIfMissing(true)
    // Charge memtable memory against the same cache; stall writes at the limit.
    setWriteBufferManager(WriteBufferManager(320 * SizeUnit.MB, cache, true))
    // setMaxWriteBufferNumber(15)
    setTableFormatConfig(blockBasedTableConfig)
}

db = RocksDB.open(options, baseDir.absolutePath)
How much memory is consumed is hard to predict and depends on the configuration as well as the size and number of objects.
For each CF, the dirty data (memory that has not been flushed yet) may consume up to write_buffer_size multiplied by max_write_buffer_number (see the sketch below).
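As a minimal sketch (standard RocksJava setters; the values are illustrative), this caps dirty data at about 4 × 64 MB = 256 MB per column family:

import org.rocksdb.Options
import org.rocksdb.util.SizeUnit

val options = Options().apply {
    setCreateIfMissing(true)
    setWriteBufferSize(64 * SizeUnit.MB) // size of a single memtable
    setMaxWriteBufferNumber(4)           // at most 4 memtables held in memory per CF
}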
The clean data includes the block_cache, and unless you set the cache_index_and_filter_blocks option to true (which is not the default), index and filter blocks will stay resident in the heap outside the cache.
The size of the index is roughly the number of blocks multiplied by (average key_size + 8 bytes); the size of the filter is roughly the number of objects multiplied by the filter's bits per key (10 bits per key for the default Bloom filter).
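For a rough back-of-the-envelope estimate (all numbers here are illustrative assumptions, using the default 10-bits-per-key Bloom filter):

import org.rocksdb.util.SizeUnit

val numKeys = 100_000_000L      // assumed number of objects
val avgKeySize = 24L            // assumed average key size, bytes
val avgValueSize = 100L         // assumed average value size, bytes
val blockSize = 4 * SizeUnit.KB // default block size

val numBlocks = numKeys * (avgKeySize + avgValueSize) / blockSize
val indexBytes = numBlocks * (avgKeySize + 8) // ~one index entry per block
val filterBytes = numKeys * 10 / 8            // ~10 bits per key (default Bloom filter)
println("index ~${indexBytes / SizeUnit.MB} MB, filter ~${filterBytes / SizeUnit.MB} MB")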
If you enable "cache_index_and_filter_blocks", you will suffer somewhat worse performance (about 10-15%, assuming those blocks fit into the cache), but you will have a predictable cap on heap memory; see the sketch below.
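A minimal sketch of that configuration (the cache size here is illustrative; in your code you would reuse the LRUCache you already created):

import org.rocksdb.BlockBasedTableConfig
import org.rocksdb.LRUCache
import org.rocksdb.util.SizeUnit

val cache = LRUCache(512 * SizeUnit.MB)
val tableConfig = BlockBasedTableConfig().apply {
    setBlockCache(cache)
    setCacheIndexAndFilterBlocks(true)        // charge index/filter blocks to the cache
    setPinL0FilterAndIndexBlocksInCache(true) // optional: keep L0 index/filter blocks pinned
}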
There are other small data structures that add to this accounting, but they are very small and would not consume more than 1-2 MB in normal configurations.