
Comments (8)

adamretter avatar adamretter commented on April 28, 2024 3

@NNSJ The stack trace shows that you called WriteBatch#clear() in your method com.supcon.datawarehouse.client.RocksDbClient#batchPut(String, Map). It looks like you did that after calling WriteBatch#close(). This leads to a use-after-free error, i.e. a SIGSEGV.

All objects that inherit from RocksObject in the RocksDB Java API need #close() called on them when you are finished with them; this frees the underlying C++ object and is akin to calling delete on the pointer if you were using the RocksDB C++ API. One way to guarantee this is Java's try-with-resources syntax.
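The lifecycle rule above can be sketched without a rocksdbjni dependency. NativeResource below is a hypothetical stand-in for a RocksObject subclass: here misuse after close() fails loudly with an exception, whereas the real JNI binding would dereference freed C++ memory and crash with a SIGSEGV.

```java
public class NativeResource implements AutoCloseable {
    private boolean open = true;

    void put(byte[] key, byte[] value) {
        if (!open) {
            // The real JNI layer has no such guard: it would touch
            // freed C++ memory and the process would die with SIGSEGV.
            throw new IllegalStateException("used after close()");
        }
        // ... real work would happen here ...
    }

    @Override
    public void close() {
        // Corresponds to freeing the underlying C++ object.
        open = false;
    }

    public static void main(String[] args) {
        // try-with-resources calls close() exactly once, at block exit,
        // so the object cannot be used after it has been freed.
        try (NativeResource batch = new NativeResource()) {
            batch.put("k".getBytes(), "v".getBytes());
        }
    }
}
```

The try-with-resources block makes the close happen even if an exception is thrown mid-write, which is hard to guarantee with a manual close() call.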

If you want to re-use the WriteBatch, call clear() after each use, and only call close() when you are finally finished with it.
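That reuse pattern can be sketched as follows; ReusableBatch is a hypothetical plain-Java stand-in (exercising the real WriteBatch would need rocksdbjni), but the shape of the loop is the point: clear() empties the batch between rounds, close() runs once at the very end.

```java
import java.util.ArrayList;
import java.util.List;

public class ReusableBatch implements AutoCloseable {
    private final List<String> entries = new ArrayList<>();

    void put(String key, String value) {
        entries.add(key + "=" + value);
    }

    int count() {
        return entries.size();
    }

    // Like WriteBatch#clear(): keep the object, drop its contents.
    void clear() {
        entries.clear();
    }

    // Like WriteBatch#close(): free the resource once, when finished.
    @Override
    public void close() { }

    public static void main(String[] args) {
        try (ReusableBatch batch = new ReusableBatch()) { // closed once, at the end
            for (int round = 0; round < 3; round++) {
                batch.put("key" + round, "value" + round);
                // ... db.write(writeOptions, batch) would go here ...
                batch.clear(); // empty the batch, ready for the next round
            }
        }
    }
}
```

Reusing one batch this way avoids allocating and freeing a native object on every write cycle.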

What's in there is the data that I wrote. I wonder why this data is always in memory.

It could be that the data is in RocksDB's memtable... It is impossible to say without inspecting where those strings sit within the used memory.

from rocksdb.

NNSJ avatar NNSJ commented on April 28, 2024 1

@adamretter Thank you very much. That fixed it. I will observe it for two days, and once it is stable I will close the issue.


adamretter avatar adamretter commented on April 28, 2024

You would need to produce a heap dump and identify where the memory is being held onto. We need to be able to determine whether it is on-heap (Java) or off-heap (C++).

The Java code itself does very little in the way of allocation, and mostly just provides access to the underlying C++ functionality.


NNSJ avatar NNSJ commented on April 28, 2024

@adamretter I am sure it is off-heap (C++), because Java GC is normal and only about 100 MB of heap is allocated.

[screenshots: JVM heap and non-heap memory charts]


NNSJ avatar NNSJ commented on April 28, 2024

The batch writes 10,000 entries per second. The code is as follows:

    WriteOptions writeOptions = new WriteOptions();
    writeOptions.setDisableWAL(false);
    WriteBatch writeBatch = new WriteBatch();
    for (Map.Entry<String, String> entry : map.entrySet()) {
        writeBatch.put(columnFamilyHandle, entry.getKey().getBytes(), entry.getValue().getBytes());
    }
    rocksDB.write(writeOptions, writeBatch);


adamretter avatar adamretter commented on April 28, 2024

@NNSJ are you closing the WriteBatch when you are done with it?


NNSJ avatar NNSJ commented on April 28, 2024

@adamretter
When I finished with the WriteBatch, I didn't close it. I then tried closing the WriteBatch, and the following error occurred:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x0000000000000045, pid=228368, tid=0x00007fd3f86ce700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_212-b10) (build 1.8.0_212-b10)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.212-b10 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  0x0000000000000045
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/gitlab/data-warehouse-rocksdb/hs_err_pid228368.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#

The error report file /home/gitlab/data-warehouse-rocksdb/hs_err_pid228368.log is attached:
hs_err_pid228368.log

The code is as follows:

    ColumnFamilyHandle columnFamilyHandle = cfAddIfNotExist(cfName); // get the column family handle

    WriteOptions writeOptions = new WriteOptions();

    //writeOptions.memtableInsertHintPerBatch()
    // enable the write-ahead log
    writeOptions.setDisableWAL(false);
    //writeOptions.disableWAL(columnFamilyHandle);
    WriteBatch writeBatch = new WriteBatch();
    for (Map.Entry<String, String> entry : map.entrySet()) {
        writeBatch.put(columnFamilyHandle, entry.getKey().getBytes(), entry.getValue().getBytes());
    }
    rocksDB.write(writeOptions, writeBatch);
    writeBatch.close();


NNSJ avatar NNSJ commented on April 28, 2024

I ran cat /proc/[rocksdbPid]/smaps > smaps.txt and found an oversized region in smaps.txt:

      01bb6000-0940b000 rw-p 00000000 00:00 0                                  [heap]
      Size:             123220 kB
      KernelPageSize:        4 kB
      MMUPageSize:           4 kB
      Rss:              122512 kB
      Pss:              122512 kB
      Shared_Clean:          0 kB
      Shared_Dirty:          0 kB
      Private_Clean:         0 kB
      Private_Dirty:    122512 kB
      Referenced:       122512 kB
      Anonymous:        122512 kB
      LazyFree:              0 kB
      AnonHugePages:    118784 kB
      ShmemPmdMapped:        0 kB
      FilePmdMapped:         0 kB
      Shared_Hugetlb:        0 kB
      Private_Hugetlb:       0 kB

Then I dumped that region:

    (gdb) dump memory /tmp/20230104-01bb6000-0940b000 0x01bb6000 0x0940b000

and inspected it:

    strings /tmp/20230104-01bb6000-0940b000

What's in there is the data that I wrote. I wonder why this data is always in memory.

Manually releasing the Linux page cache had little effect.
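The /proc inspection above can also be scripted. As a minimal sketch (Linux-only; the class name SmapsRss and the choice to sum the Rss field are my own), this snippet totals the Rss: lines of a process's smaps file, which gives the resident size across all mappings in kB:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SmapsRss {
    // Sum the "Rss:" field of every mapping in /proc/<pid>/smaps, in kB.
    static long totalRssKb(String pid) throws IOException {
        long total = 0;
        for (String line : Files.readAllLines(Paths.get("/proc/" + pid + "/smaps"))) {
            if (line.startsWith("Rss:")) {
                // Line format: "Rss:            122512 kB"
                String[] parts = line.trim().split("\\s+");
                total += Long.parseLong(parts[1]);
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // "self" inspects the current JVM; pass a pid string to inspect
        // another process, e.g. the RocksDB one.
        System.out.println("Total RSS: " + totalRssKb("self") + " kB");
    }
}
```

Sampling this periodically for the RocksDB process would show whether the [heap] region keeps growing or plateaus once the memtables are full.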

