Comments (8)
@NNSJ The stack trace shows that WriteBatch#clear() was called within your function com.supcon.datawarehouse.client.RocksDbClient#batchPut(String, Map), and that this happened after WriteBatch#close() had already been called. That is a use-after-free error, which manifests as a SIGSEGV.
All objects that inherit from RocksObject in the RocksDB Java API need to have #close() called on them when you are finished with them; this frees the underlying C++ object and is akin to calling delete ptr if you were using the RocksDB C++ API. One convenient way to guarantee this is Java's try-with-resources syntax.
- See https://github.com/facebook/rocksdb/wiki/RocksJava-Basics#memory-management
- Also you can study the code of the RocksJava Unit Tests to see when and how close should be used - https://github.com/facebook/rocksdb/blob/main/java/src/test/java/org/rocksdb/WriteBatchTest.java
If you want to re-use the WriteBatch, you can call clear after each use, and only close it when you are finally finished with it.
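The clear-for-reuse versus close-for-good distinction can be sketched with a stand-in class. FakeBatch below is hypothetical, not part of RocksDB; it only mimics how org.rocksdb.WriteBatch behaves: after close() the native C++ object is freed, so any further call on the real class is a use-after-free (the SIGSEGV above), modelled here as an exception.

```java
// FakeBatch is a hypothetical stand-in for org.rocksdb.WriteBatch.
// clear() empties the batch but keeps it usable; close() ends its life.
final class FakeBatch implements AutoCloseable {
    private boolean closed = false;
    private int entries = 0;

    void put(byte[] key, byte[] value) {
        if (closed) throw new IllegalStateException("use after close");
        entries++;
    }

    // Safe to call between writes: the batch stays open and can be reused.
    void clear() {
        if (closed) throw new IllegalStateException("use after close");
        entries = 0;
    }

    int count() { return entries; }

    @Override
    public void close() { closed = true; } // the real close() frees the native handle
}

public class BatchLifecycle {
    public static void main(String[] args) {
        // try-with-resources calls close() exactly once, when the block exits.
        try (FakeBatch batch = new FakeBatch()) {
            batch.put("k1".getBytes(), "v1".getBytes());
            batch.clear();                      // OK: reuse before close
            batch.put("k2".getBytes(), "v2".getBytes());
            System.out.println("entries=" + batch.count());
        }
        // Calling batch.clear() here would throw in this sketch, and would
        // crash the JVM with the real JNI-backed WriteBatch.
    }
}
```

The same ordering rule applies to the real class: clear() any number of times while the batch is open, close() once at the very end, never the other way round.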
What's in there is the data that I wrote. I wonder why this data is always in memory
It could be that the data is in the Memtable of RocksDB... It is impossible to say without looking to see where the strings are within the used memory.
from rocksdb.
@adamretter Thank you very much. That fixed it. I will observe it for two days to confirm it is stable, and then close the issue.
You would need to produce a heapdump and identify where the memory is being held onto. We need to be able to determine if it is on heap (Java) or off heap (C++).
The Java code itself does very little in the way of allocation, and mostly just provides access to the underlying C++ functionality.
@adamretter I am sure it is off heap (C++), because Java GC is normal and the Java heap is only allocated 100 MB.
The batch writes 10,000 entries per second. The code is as follows:
WriteOptions writeOptions = new WriteOptions();
writeOptions.setDisableWAL(false);
WriteBatch writeBatch = new WriteBatch();
for (Map.Entry<String, String> entry : map.entrySet()) {
writeBatch.put(columnFamilyHandle, entry.getKey().getBytes(), entry.getValue().getBytes());
}
rocksDB.write(writeOptions, writeBatch);
@NNSJ are you closing the WriteBatch when you are done with it?
@adamretter
When I finished with the WriteBatch, I didn't close it. Then I tried closing the WriteBatch, and the following error occurred:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000000045, pid=228368, tid=0x00007fd3f86ce700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_212-b10) (build 1.8.0_212-b10)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.212-b10 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x0000000000000045
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/gitlab/data-warehouse-rocksdb/hs_err_pid228368.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
file /home/gitlab/data-warehouse-rocksdb/hs_err_pid228368.log is as follows:
hs_err_pid228368.log
The code is as follows:
ColumnFamilyHandle columnFamilyHandle = cfAddIfNotExist(cfName); // get the column family handle
WriteOptions writeOptions = new WriteOptions();
//writeOptions.memtableInsertHintPerBatch()
// enable the write-ahead log
writeOptions.setDisableWAL(false);
//writeOptions.disableWAL(columnFamilyHandle);
WriteBatch writeBatch = new WriteBatch();
for (Map.Entry<String, String> entry : map.entrySet()) {
writeBatch.put(columnFamilyHandle, entry.getKey().getBytes(), entry.getValue().getBytes());
}
rocksDB.write(writeOptions, writeBatch);
writeBatch.close();
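The snippet above closes the batch only on the success path: if rocksDB.write throws, both writeOptions and writeBatch leak their native memory. Java's try-with-resources closes every declared resource, in reverse declaration order, even when the body throws. A minimal self-contained sketch of that guarantee, using a hypothetical Tracked stand-in rather than the real RocksDB classes:

```java
import java.util.ArrayList;
import java.util.List;

// Tracked is a hypothetical stand-in for WriteOptions / WriteBatch: it only
// records when close() runs, so the close-on-exception guarantee is visible.
public class TryWithResourcesDemo {
    static final List<String> closed = new ArrayList<>();

    static final class Tracked implements AutoCloseable {
        final String name;
        Tracked(String name) { this.name = name; }
        @Override
        public void close() { closed.add(name); }
    }

    // Mirrors the shape of batchPut: two resources, a body that may throw.
    static void write(boolean fail) {
        try (Tracked writeOptions = new Tracked("writeOptions");
             Tracked writeBatch = new Tracked("writeBatch")) {
            if (fail) throw new RuntimeException("simulated write failure");
        }
    }

    public static void main(String[] args) {
        try { write(true); } catch (RuntimeException ignored) { }
        // Both resources were closed despite the exception, last-declared first:
        System.out.println(closed); // prints [writeBatch, writeOptions]
    }
}
```

Declaring the real WriteOptions and WriteBatch in the same try-with-resources header would give batchPut the same exception-safety with no explicit close() calls.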
cat /proc/[rocksdbPid]/smaps > smaps.txt
The region with excessive memory in smaps.txt is:
01bb6000-0940b000 rw-p 00000000 00:00 0 [heap]
Size: 123220 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 122512 kB
Pss: 122512 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 122512 kB
Referenced: 122512 kB
Anonymous: 122512 kB
LazyFree: 0 kB
AnonHugePages: 118784 kB
ShmemPmdMapped: 0 kB
FilePmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Then dump that memory region with gdb and inspect the strings in it:
(gdb) dump memory /tmp/20230104-01bb6000-0940b000 0x01bb6000 0x0940b000
strings /tmp/20230104-01bb6000-0940b000
What's in there is the data that I wrote. I wonder why this data stays in memory.
Manually releasing the Linux page cache had little effect.