
dbx1000's People

Contributors

apavlo, foconrad, yujjjang, yxymit


dbx1000's Issues

YCSB workload & table generation concerns

Hi Xiangyao,

I am writing with some concerns and questions I ran into when building and running DBx1000.

  1. I dumped the tables generated by both init_table() and init_table_parallel() to disk files. The README says there should be 10 additional columns, each holding a randomly generated 100-byte string. But the table produced by init_table_parallel() seems to contain only "hello" as the column value, and there are not 10 columns. Which table-generation scheme should one choose, and does this single table actually have 10 columns?

  2. It is also claimed that the YCSB workload was chosen as one of the standard benchmarks. However, the workloads here are generated from a Zipfian distribution using the read/write ratios of the major YCSB workloads, so the queries are not directly generated by YCSB itself, correct? In the "request" objects, the values are all one-byte characters. If such key/value pairs are inserted into the table, would that affect the original table schema?

  3. There are 32 buckets in the hash index, and I understand that each transaction and each row gets its own Manager to handle rts/wts. But how are two read requests handled when they target the same bucket? With 10 million rows in this single table and only 32 buckets, each bucket could hold roughly three hundred thousand rows. If rows can be accessed concurrently under the TicToc algorithm, how are the buckets themselves accessed? Wouldn't the buckets need their own concurrency control?
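On question 3, a common pattern (an assumption on my part, not necessarily DBx1000's exact scheme) is for the hash index to use a short per-bucket latch that protects only the bucket's chain during lookup or insert, while row-level concurrency control (e.g., TicToc's rts/wts) is a separate layer applied after the row pointer is found. A minimal sketch with illustrative names:

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <vector>

// Hypothetical sketch: the bucket latch is held only while walking the
// chain; it does not participate in transaction-level concurrency control.
struct Bucket {
    std::mutex latch;               // protects the chain, held briefly
    std::vector<uint64_t> row_ids;  // stand-in for the row chain
};

struct HashIndex {
    std::vector<Bucket> buckets;
    explicit HashIndex(size_t n) : buckets(n) {}

    void insert(uint64_t key) {
        Bucket &b = buckets[key % buckets.size()];
        std::lock_guard<std::mutex> g(b.latch);
        b.row_ids.push_back(key);
    }
    bool lookup(uint64_t key) {
        Bucket &b = buckets[key % buckets.size()];
        std::lock_guard<std::mutex> g(b.latch);  // readers also latch briefly
        for (uint64_t id : b.row_ids)
            if (id == key) return true;
        return false;
    }
};
```

Under this design, two reads to the same bucket simply serialize on the short latch; the expensive coordination happens per row, not per bucket.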

Best,
YJ

VLL CC Algo

Implement the Very Lightweight Locking (VLL) algorithm.

WAIT_DIE in conflict situation

Hello, I've been looking at the 2PL WAIT_DIE code:

if (en->txn->get_ts() < txn->get_ts()) {

T is the txn currently running. [If] T.ts > ts of all owners T can wait [ELSE] T should abort

This comment describes whether a transaction can wait or not, but the code is implemented the opposite way:
following the code, if the current transaction's timestamp is larger than an owner's timestamp,
the current transaction aborts.
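For reference, a minimal sketch of the textbook wait-die rule (the function and enum names are mine, not the repo's): the older requester (smaller timestamp) may wait for a younger owner; a younger requester dies immediately.

```cpp
#include <cassert>
#include <cstdint>

enum class Decision { WAIT, ABORT };

// Wait-die: smaller timestamp == older transaction. An older requester
// waits; a younger (or equal-timestamp) requester aborts.
Decision wait_die(uint64_t requester_ts, uint64_t owner_ts) {
    return (requester_ts < owner_ts) ? Decision::WAIT : Decision::ABORT;
}
```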

Thank you.

TIMESTAMP in TPC-C benchmarks

Hi, I have a question from building and running DBx1000.

When I run rundb with CC_ALG=TIMESTAMP on the TPC-C benchmark, I found something interesting:
when I change the PERC_PAYMENT value from 0.5 to 0 or from 0.5 to 1.0, it works well (it also works well on the YCSB workload).
But the other cases do not work (i.e., when PERC_PAYMENT is neither 0.0 nor 1.0).

My question is: is this error an algorithmic problem in TIMESTAMP, or is it caused by some other factor?
I've tried to find the bug in the code, but the problem is really not simple for me.
Thank you.

Commented out sections in TPCC

Hi, I'm looking at DBx1000's implementation of "new order" transactions in TPCC, and came across the following line:

// XXX district info is not inserted.

The commented-out section below it inserts into the order-line table, which TPC-C requires. Is this a bug?

It also appears to me that the benchmarks in DBx1000 never exercise insert_row() in any index structure, even though the indexes themselves seem functional. Is this intended?

compile and run issue

make entered an infinite loop until I deleted the -include line; after that, it compiled successfully.

But when I try to run the binary, I get "Segmentation fault (core dumped)".
Is this perhaps a configuration problem?

Build Fails

make -j9
g++ -o rundb config.o benchmarks/ycsb_query.o benchmarks/ycsb_wl.o benchmarks/test_txn.o benchmarks/tpcc_query.o benchmarks/ycsb_txn.o benchmarks/tpcc_wl.o benchmarks/tpcc_txn.o benchmarks/tpcc_helper.o benchmarks/test_wl.o concurrency_control/row_hekaton.o concurrency_control/row_vll.o concurrency_control/dl_detect.o concurrency_control/row_ts.o concurrency_control/row_occ.o concurrency_control/occ.o concurrency_control/row_mvcc.o concurrency_control/vll.o concurrency_control/plock.o concurrency_control/row_silo.o concurrency_control/tictoc.o concurrency_control/row_lock.o concurrency_control/silo.o concurrency_control/hekaton.o concurrency_control/row_tictoc.o storage/catalog.o storage/index_hash.o storage/index_btree.o storage/row.o storage/table.o system/stats.o system/thread.o system/global.o system/manager.o system/txn.o system/helper.o system/wl.o system/parser.o system/main.o system/mem_alloc.o system/query.o -Wall -L. -L./libs -pthread -g -lrt -std=c++0x -O3 -ljemalloc -Wall -g -std=c++0x -I. -I./benchmarks -I./concurrency_control -I./storage -I./system -D NOGRAPHITE=1 -Werror -O3
/usr/bin/ld: ./libs/libjemalloc.a(jemalloc.o): relocation R_X86_64_32 against undefined hidden symbol `je_arenas_lock' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(arena.o): relocation R_X86_64_32 against hidden symbol `je_tcache_thread_cleanup' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(base.o): relocation R_X86_64_32 against `.bss' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(chunk.o): relocation R_X86_64_32 against undefined hidden symbol `je_chunks_mtx' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(chunk_dss.o): relocation R_X86_64_32 against `.bss' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(chunk_mmap.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(ctl.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(huge.o): relocation R_X86_64_32 against undefined hidden symbol `je_huge_mtx' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(mutex.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(prof.o): relocation R_X86_64_32 against undefined hidden symbol `je_opt_prof_prefix' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(quarantine.o): relocation R_X86_64_32S against undefined hidden symbol `je_arena_bin_info' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(stats.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(tcache.o): relocation R_X86_64_32S against undefined hidden symbol `je_arena_bin_info' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(util.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(tsd.o): relocation R_X86_64_32S against `.bss' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ./libs/libjemalloc.a(ckh.o): relocation R_X86_64_32 against hidden symbol `je_tcache_thread_cleanup' can not be used when making a PIE object
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status
Makefile:20: recipe for target 'rundb' failed
make: *** [rundb] Error 1

Can I know why this is happening?

Write local data or the row in the table

In RC ycsb_txn_man::run_txn(base_query * query), to update data, should the transaction write to a local copy of the row and only write back to the table row at commit time?
char * data = row->get_data();
Should this be row_local instead?

assert(req->rtype == WR);

// for (int fid = 0; fid < schema->get_field_cnt(); fid++) {
int fid = 0;
char * data = row->get_data();
*(uint64_t *)(&data[fid * 10]) = 0;
// }
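To illustrate the pattern the question is asking about, here is a hypothetical sketch (the names Row, LocalWrite, etc. are illustrative, not DBx1000's actual API): the transaction mutates a private copy, and the shared row is only overwritten at commit.

```cpp
#include <cassert>
#include <cstring>

struct Row {
    char data[64];
};

// Buffer writes in a local copy; flush to the shared row only at commit.
struct LocalWrite {
    Row *shared;  // row in the table
    Row local;    // private copy the transaction mutates

    explicit LocalWrite(Row *r) : shared(r) { local = *r; }
    char *get_local_data() { return local.data; }
    void commit() { memcpy(shared->data, local.data, sizeof local.data); }
};
```

Until commit() runs, concurrent readers of the shared row never observe the transaction's partial updates.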

Compilation error: strncpy truncates string

I get the following error during compilation:

benchmarks/tpcc_txn.cpp: In member function ‘RC tpcc_txn_man::run_payment(tpcc_query*)’:
benchmarks/tpcc_txn.cpp:211:9: error: ‘char* strncpy(char*, const char*, size_t)’ output may be truncated copying 10 bytes from a string of length 10 [-Werror=stringop-truncation]
  211 |  strncpy(h_data, w_name, 10);
      |  ~~~~~~~^~~~~~~~~~~~~~~~~~~~

I think this is because w_name is declared as 11 characters long (the last character likely being the NUL terminator), but only the first 10 are copied here. Is this intended? It seems to me that, if the destination array is not null-terminated, that could cause problems for the strlen call afterwards, even if it turns out not to matter because multiple strings are being concatenated.
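One possible workaround (an assumption on my part, not the authors' actual fix) is to copy the fixed-width field explicitly with memcpy and pad the remainder, which states the intent and avoids the -Wstringop-truncation warning:

```cpp
#include <cassert>
#include <cstring>

// Copy src into a fixed-width, space-padded field of `width` bytes,
// as TPC-C character fields are conventionally stored. The destination
// is deliberately not NUL-terminated.
void copy_field(char *dst, const char *src, size_t width) {
    size_t n = 0;
    while (n < width && src[n] != '\0') n++;  // length of src, capped at width
    memcpy(dst, src, n);
    memset(dst + n, ' ', width - n);
}
```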

problems using DBx1000

I had some compile issues when I tried to change configs to TPCC_Full and TPCC_Access_All. I also had some run-time issues when I used B-Tree index. I fixed all of them by slightly modifying the code. I thought it would be better to inform you guys in case you want to fix them for the future.

Stack-buffer-overflow in tpcc_wl::init_tab_wh()

row->set_value(W_ID, wid);

I believe that instead of
row->set_value(W_ID, wid);
we can have
int64_t w_id = wid;
row->set_value(W_ID, &w_id);

=================================================================
==122642==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fff229ece64 at pc 0x7ffa528e369a bp 0x7fff229ecde0 sp 0x7fff229ec588
READ of size 8 at 0x7fff229ece64 thread T0
#0 0x7ffa528e3699 in __interceptor_memcpy ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:823
#1 0x55f01952c9f2 in row_t::set_value(int, void*) storage/row.cpp:90
#2 0x55f01952cece in row_t::set_value(int, unsigned int) storage/row.cpp:106
#3 0x55f019502cab in tpcc_wl::init_tab_wh(unsigned int) benchmarks/tpcc_wl.cpp:119
#4 0x55f01950672d in tpcc_wl::threadInitWarehouse(void*) benchmarks/tpcc_wl.cpp:399
#5 0x55f019502240 in tpcc_wl::init_table() benchmarks/tpcc_wl.cpp:72
#6 0x55f019500689 in tpcc_wl::init() benchmarks/tpcc_wl.cpp:27
#7 0x55f019542a78 in main system/main.cpp:44
#8 0x7ffa523620b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#9 0x55f0194f9f2d in _start (/workspace/concurrency/code/2plsf/DBx1000/rundb+0x17f2d)

Address 0x7fff229ece64 is located in stack of thread T0 at offset 36 in frame
#0 0x55f01952ce25 in row_t::set_value(int, unsigned int) storage/row.cpp:106

This frame has 1 object(s):
[32, 36) 'value' (line 106) <== Memory access at offset 36 overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork
(longjmp and C++ exceptions are supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:823 in __interceptor_memcpy
Shadow bytes around the buggy address:
0x100064535970: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100064535980: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100064535990: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x1000645359a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x1000645359b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x1000645359c0: 00 00 00 00 00 00 00 00 f1 f1 f1 f1[04]f3 f3 f3
0x1000645359d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1
0x1000645359e0: f1 f1 00 00 00 f2 00 00 00 f2 f2 f2 02 f2 00 01
0x1000645359f0: f2 f2 00 02 f2 f2 00 00 04 f3 f3 f3 f3 f3 00 00
0x100064535a00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100064535a10: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc

valgrind - Row_hekaton::prepare_read - use of uninitialised value

Configuration:

$ git diff
diff --git a/config.h b/config.h
index 34a59c1..0d1c7d6 100644
--- a/config.h
+++ b/config.h
@@ -16,7 +16,7 @@
 // # of transactions to run for warmup
 #define WARMUP                                         0
 // YCSB or TPCC
-#define WORKLOAD                                       YCSB
+#define WORKLOAD YCSB
 // print the transaction latency distribution
 #define PRT_LAT_DISTR                          false
 #define STATS_ENABLE                           true
@@ -39,7 +39,7 @@
 /***********************************************/
 // WAIT_DIE, NO_WAIT, DL_DETECT, TIMESTAMP, MVCC, HEKATON, HSTORE, OCC, VLL, TICTOC, SILO
 // TODO TIMESTAMP does not work at this moment
-#define CC_ALG                                                 TICTOC
+#define CC_ALG HEKATON
 #define ISOLATION_LEVEL                        SERIALIZABLE
 
 // all transactions acquire tuples according to the primary key order.
@@ -109,12 +109,12 @@
 // max number of rows touched per transaction
 #define MAX_ROW_PER_TXN                                64
 #define QUERY_INTVL                            1UL
-#define MAX_TXN_PER_PART                       100000
+#define MAX_TXN_PER_PART                       100
 #define FIRST_PART_LOCAL                       true
 #define MAX_TUPLE_SIZE                         1024 // in bytes
 // ==== [YCSB] ====
 #define INIT_PARALLELISM                       40
-#define SYNTH_TABLE_SIZE                       (1024 * 1024 * 10)
+#define SYNTH_TABLE_SIZE                       (1024 * 40)
 #define ZIPF_THETA                                     0.6
 #define READ_PERC                                      0.9
 #define WRITE_PERC                                     0.1
$ valgrind ./rundb 
==22672== Memcheck, a memory error detector
==22672== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==22672== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==22672== Command: ./rundb
==22672==
mem_allocator initialized!
workload initialized!
Query Queue Init Time 0.081279
query_queue initialized!
==22672== Thread 2:
==22672== Conditional jump or move depends on uninitialised value(s)
==22672==    at 0x40FE87: Row_hekaton::prepare_read(txn_man*, row_t*, unsigned long) (row_hekaton.cpp:161)
==22672==    by 0x40F72D: txn_man::validate_hekaton(RC) (hekaton.cpp:20)
==22672==    by 0x41658C: txn_man::finish(RC) (txn.cpp:224)
==22672==    by 0x41469D: thread_t::run() (thread.cpp:136)
==22672==    by 0x41B323: f(void*) (main.cpp:111)
==22672==    by 0x593E61A: start_thread (in /usr/lib64/libpthread-2.26.so)
==22672==    by 0x5C6D98E: clone (in /usr/lib64/libc-2.26.so)
==22672==
PASS! SimTime = 31703788

Speculative Execution

Implement speculative execution for the HStore CC algorithm so that single-partition transactions can execute out of order while a multi-partition transaction is completing.

Relocation error when building the project on a 64-bit Ubuntu machine

Hi, when I try to build this project on a 64-bit Ubuntu machine, I see the following:

/usr/bin/ld: ./libs/libjemalloc.a(jemalloc.o): relocation R_X86_64_32 against undefined hidden symbol `je_arenas_lock' can not be used when making a PIE object
/usr/bin/ld: ./libs/libjemalloc.a(arena.o): relocation R_X86_64_32 against hidden symbol `je_tcache_thread_cleanup' can not be used when making a PIE object

I believe it's the same issue as this one:
https://stackoverflow.com/questions/46123505/assembling-with-gcc-causes-weird-relocation-error-with-regards-to-data

I can fix the issue by adding -no-pie to the Makefile.
Would you like me to submit a fix?
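Assuming the project's linker flags live in the Makefile's LDFLAGS variable (as the link command in the log suggests), the workaround can be applied without editing the file by overriding it on the command line. Alternatively, rebuilding libjemalloc.a with -fPIC also resolves the relocation errors.

```shell
# Link with -no-pie so the prebuilt, non-PIC libjemalloc.a can still be
# used on toolchains that default to PIE executables.
make clean
make -j LDFLAGS="-Wall -L. -L./libs -pthread -g -lrt -std=c++0x -O3 -ljemalloc -no-pie"
```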

Inserts missing in TPC-C benchmark?

Dear authors of DBx1000,

I'm currently planning to follow up on your work on OLTP on 1000 cores.
Since I've got access to a multi-socket server with 1500 cores, I'd like to extend the insight for many-core systems from your paper to current multi-socket systems.

However, when browsing your code, I came across the following commented inserts in the TPC-C benchmark.

// insert_row(r_hist, _wl->t_history);

Could you explain which implementation (e.g., commit ref) I should use for follow-up work?

Best Regards,
Tiemo

Uninitialized next_tid in TPC-C

uint32_t next_tid;

next_tid is not initialized. This causes ASan to abort, because we write to a bad address when initializing tpcc_buffer[tid] in tpcc_wl::threadInitWarehouse().

AddressSanitizer:DEADLYSIGNAL

==120532==ERROR: AddressSanitizer: SEGV on unknown address 0x601df5f5f6e0 (pc 0x563e33d755b0 bp 0x7ffcae91f320 sp 0x7ffcae91f2e0 T0)
==120532==The signal is caused by a WRITE memory access.
#0 0x563e33d755b0 in tpcc_wl::threadInitWarehouse(void*) benchmarks/tpcc_wl.cpp:392
#1 0x563e33d71240 in tpcc_wl::init_table() benchmarks/tpcc_wl.cpp:72
#2 0x563e33d6f689 in tpcc_wl::init() benchmarks/tpcc_wl.cpp:27
#3 0x563e33db19fa in main system/main.cpp:44
#4 0x7f99185f40b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#5 0x563e33d68f2d in _start (/workspace/concurrency/code/2plsf/DBx1000/rundb+0x17f2d)
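A minimal sketch of the suggested fix: give the thread-id counter a defined starting value before worker threads increment it. The name next_tid follows the issue; wrapping it in std::atomic is my assumption (the repo may use a GCC builtin instead), but it makes the concurrent increment well-defined as well.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Was: uint32_t next_tid;  // uninitialized -> garbage index into tpcc_buffer
std::atomic<uint32_t> next_tid{0};

// Each initialization thread claims a unique, in-range thread id.
uint32_t claim_tid() {
    return next_tid.fetch_add(1);
}
```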

How to interpret the experiment result?

Hi all,
I have some questions about the DBx1000 experiment results.
In my case, after an execution (TPC-C or YCSB), the summary results are txn_cnt, abort_cnt, run_time, etc., and run_time is really small (e.g., 0.0029 sec).

  • The typical TPC-C benchmark takes a time parameter and reports "tpm-C" and "TPS". How is tpm-C or throughput evaluated in DBx1000? (Just txn_cnt / run_time?)

  • In the source code, why are the "*** time" values (run_time, abort_time, etc.) divided by one billion?
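If the timers are collected with a nanosecond-resolution clock (which would explain the division by one billion when printing), throughput follows directly from the counters. A sketch (note that tpm-C proper counts only NewOrder transactions per minute, which these aggregate counters do not split out):

```cpp
#include <cassert>
#include <cstdint>

// Transactions per second from an aggregate count and a nanosecond
// run time; nanoseconds -> seconds is the "divide by billion" step.
double throughput_tps(uint64_t txn_cnt, uint64_t run_time_ns) {
    double run_time_sec = (double)run_time_ns / 1e9;
    return (double)txn_cnt / run_time_sec;
}
```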

Thanks in advance!

SUT-generated date and time in the TPC-C benchmark

Dear authors,
Hello, I'm trying to implement the full TPC-C schema, and I noticed that some date attributes such as h_data are missing, while other dates such as o_entry_d are assigned the fixed value 2013. Does the date really matter in this simulation? I'm wondering whether I can use a fixed value for all dates.

Stack buffer overflow in tpcc_wl::init_tab_item()

row->set_primary_key(i);

I believe instead of
row->set_value(I_ID, i);
we can do:
uint64_t i_id = i;
row->set_value(I_ID, &i_id);

=================================================================
==122100==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffd6d837bf4 at pc 0x7f157087169a bp 0x7ffd6d837b70 sp 0x7ffd6d837318
READ of size 8 at 0x7ffd6d837bf4 thread T0
#0 0x7f1570871699 in __interceptor_memcpy ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:823
#1 0x55bec5e1998d in row_t::set_value(int, void*) storage/row.cpp:91
#2 0x55bec5e19e6a in row_t::set_value(int, unsigned int) storage/row.cpp:107
#3 0x55bec5def6a1 in tpcc_wl::init_tab_item() benchmarks/tpcc_wl.cpp:94
#4 0x55bec5df369e in tpcc_wl::threadInitWarehouse(void*) benchmarks/tpcc_wl.cpp:397
#5 0x55bec5def240 in tpcc_wl::init_table() benchmarks/tpcc_wl.cpp:72
#6 0x55bec5ded689 in tpcc_wl::init() benchmarks/tpcc_wl.cpp:27
#7 0x55bec5e2fa14 in main system/main.cpp:44
#8 0x7f15702f00b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#9 0x55bec5de6f2d in _start (/workspace/concurrency/code/2plsf/DBx1000/rundb+0x17f2d)

Address 0x7ffd6d837bf4 is located in stack of thread T0 at offset 36 in frame
#0 0x55bec5e19dc1 in row_t::set_value(int, unsigned int) storage/row.cpp:107

This frame has 1 object(s):
[32, 36) 'value' (line 107) <== Memory access at offset 36 overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork
(longjmp and C++ exceptions are supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:823 in __interceptor_memcpy
Shadow bytes around the buggy address:
0x10002dafef20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10002dafef30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10002dafef40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10002dafef50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10002dafef60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10002dafef70: 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1[04]f3
0x10002dafef80: f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10002dafef90: f1 f1 f1 f1 00 f2 f2 f2 00 f2 f2 f2 00 00 00 f2
0x10002dafefa0: f2 f2 f2 f2 00 00 00 00 00 00 02 f3 f3 f3 f3 f3
0x10002dafefb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10002dafefc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==122100==ABORTING

Build error

When I try to build using make -j, I get the following error:
system/manager.cpp:69:6: error: this ‘for’ clause does not guard... [-Werror=misleading-indentation]
for (UInt32 i = 0; i < g_thread_cnt; i++)
^~~
system/manager.cpp:72:3: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘for’
if (min > _min_ts)
^~
cc1plus: all warnings being treated as errors
Makefile:28: recipe for target 'system/manager.o' failed
make: *** [system/manager.o] Error 1
make: *** Waiting for unfinished jobs....

It seems like this stops the build from completing.
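A sketch of the kind of change that silences -Wmisleading-indentation in system/manager.cpp: brace the loop body so the trailing `if` is clearly outside the loop. The surrounding names (g_thread_cnt, _min_ts) follow the error message; the loop body itself is an assumption here.

```cpp
#include <cassert>
#include <cstdint>

// Compute the minimum timestamp across threads, then advance _min_ts.
// Braces on the for loop make it explicit that the final `if` runs once,
// after the loop, which is what the compiler warning was about.
uint64_t update_min_ts(const uint64_t *all_ts, uint32_t g_thread_cnt,
                       uint64_t _min_ts) {
    uint64_t min = UINT64_MAX;
    for (uint32_t i = 0; i < g_thread_cnt; i++) {
        if (all_ts[i] < min)
            min = all_ts[i];
    }
    if (min > _min_ts)  // no longer looks like part of the loop
        _min_ts = min;
    return _min_ts;
}
```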

No failure recovery?

I see that LOG_COMMAND and LOG_REDO are both set to false. Does that mean there is no failure-recovery mechanism implemented in this repo?
