
Zilliqa is the world's first high-throughput public blockchain platform - designed to scale to thousands of transactions per second.

Home Page: https://www.zilliqa.com

License: GNU General Public License v3.0

CMake 1.52% C 0.70% C++ 81.67% Shell 1.56% Python 5.04% Dockerfile 0.15% Makefile 0.01% Rust 2.48% Solidity 1.30% JavaScript 0.01% TypeScript 5.55%

zilliqa's Introduction

Zilliqa

Overview

Zilliqa is a scalable smart contract platform that aims to tackle the congestion issue plaguing the blockchain industry. Zilliqa utilises a unique sharded architecture to achieve parallel processing of transactions while maintaining a large number of public nodes. Hence, Zilliqa is a blockchain capable of reaching high throughput and processing more complex computations while remaining decentralised and secure.

NOTE: The master branch is not for production use, as it is under constant development. Please use the tagged releases if you wish to work with the version of the Zilliqa client that is running live on the Zilliqa blockchain.

Zilliqa Mainnet

The current live version on the Zilliqa Mainnet is Zilliqa v9.2.3 and Scilla v0.13.3.

URL(s)
  • API URL: https://api.zilliqa.com/
  • Block Explorer: Viewblock, DEVEX

Developer Testnet

The current live version on the Developer Testnet is Zilliqa v9.2.5 and Scilla v0.13.3.

URL(s)
  • API URL: https://dev-api.zilliqa.com/
  • Block Explorer: Viewblock, DEVEX
  • Faucet: Link

Zilliqa Improvement Proposal (ZIP)

The Zilliqa Improvement Proposals (ZIPs) are the core protocol standards for the Zilliqa platform. To view or contribute to ZIP, please visit https://github.com/Zilliqa/zip

Available Features

The current release has the following features implemented:

In the coming months, we plan to have the following features:

  • Further unit and integration tests
  • Enhancement of existing features
  • More operating system support
  • And much more...

Minimum System Requirements

To run Zilliqa, we recommend the minimum system requirements specified in our Mining page.

Build from Source Code

Starting with Zilliqa v8.6.0, the officially supported operating system is Ubuntu 22.04.

If you'd like to experiment with a different distro (including the previously supported Ubuntu 18.04), please make sure to install gcc >= 11.

Run the following to install the build dependencies:

sudo apt-get update
sudo apt-get install autoconf \
    build-essential \
    ccache \
    clang-format \
    clang-tidy \
    git \
    lcov \
    libcurl4-openssl-dev \
    libssl-dev \
    libtool \
    libxml2-utils \
    ninja-build \
    ocl-icd-opencl-dev \
    pkg-config \
    python3-dev \
    python3-pip \
    libgmp-dev \
    bison \
    gawk
git submodule update --init --recursive

Run the following to install the latest version of CMake (version >= 3.19 must be used):

wget https://github.com/Kitware/CMake/releases/download/v3.19.3/cmake-3.19.3-Linux-x86_64.sh
mkdir -p "${HOME}"/.local
bash ./cmake-3.19.3-Linux-x86_64.sh --skip-license --prefix="${HOME}"/.local/
export PATH=$HOME/.local/bin:$PATH
cmake --version
rm cmake-3.19.3-Linux-x86_64.sh

To install vcpkg, clone it to a separate location (do not use brew on macOS):

$ git clone https://github.com/Microsoft/vcpkg.git /path/to/vcpkg
$ cd /path/to/vcpkg && git checkout 2022.09.27 && ./bootstrap-vcpkg.sh
$ cd /path/to/zilliqa
$ export VCPKG_ROOT=/path/to/vcpkg

As part of building our source code, we patch websocketpp 0.8.2 to compile on C++20; please see the license: https://github.com/zaphoyd/websocketpp/blob/master/COPYING.

Build Zilliqa from the source:

# build Zilliqa binary
$ ./build.sh

If you want to contribute by submitting code changes in a pull request, perform the build with clang-format and clang-tidy enabled:

$ ./build.sh style

Build Scilla for Smart Contract Execution

The Zilliqa client works together with Scilla for executing smart contracts. Please refer to the Scilla repository for build and installation instructions.

Boot Up a Local Testnet for Development

  1. Run the local testnet scripts in the build directory:

    $ cd build && ./tests/Node/pre_run.sh && ./tests/Node/test_node_lookup.sh && ./tests/Node/test_node_simple.sh
  2. Logs of each node can be found at ./local_run

  3. To terminate Zilliqa:

    $ pkill zilliqa

Start a local network development environment

This is similar to the above, but deploys a local testnet to a local minikube cluster.

You can find documentation on how to do this on your local machine in docs/localdev.md.

You can find scripts which will set up an Ubuntu 22.04 machine in the cloud (or install necessary dependencies on your machine) in docs/setup/README.md.

Further Enquiries

Link(s)
  • Development discussion (Discord)
  • Bug report
  • Security contact: security 🌐 zilliqa.com
  • Security bug bounty: HackerOne bug bounty

zilliqa's People

Contributors

ansnunez, art-gor, bb111189, bzawisto, chetan-zilliqa, ckyang, deepgully, frankmeds, gnnng, iantanwx, its-saeed, jameshinshelwood, jazzz42, jendis, jiayaoqijia, kaikawaliu, kaustubhshamshery, moboware, n-hutton, nnamon, revolution1, robin-thomas, rrw-zilliqa, sandipbhoir, sharwell, shengguangxiao, sidutta, steve-white-uk, vaivaswatha, yaron-zilliqa

zilliqa's Issues

Incompatible enum comparison

Potential logical error.

libDirectoryService/DirectoryService.cpp:735:17: error: comparison of two values with different enumeration types ('DirectoryService::DirState' and 'DirectoryService::Action') [-Werror,-Wenum-compare]
    if (m_state != PROCESS_DSBLOCKCONSENSUS and m_requesting_last_ds_block)
        ~~~~~~~ ^  ~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.

if (m_state != PROCESS_DSBLOCKCONSENSUS and m_requesting_last_ds_block)
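
For illustration, here is a minimal, self-contained reproduction of this class of mismatch and the shape of the fix. The enumerators below are hypothetical stand-ins, not the project's real definitions, and the correct DirState value to compare against is for the authors to decide:

#include <cstdio>

// Hypothetical stand-ins for DirectoryService::DirState and DirectoryService::Action.
enum DirState { IDLE, DSBLOCK_CONSENSUS };
enum Action { PROCESS_DSBLOCKCONSENSUS };

int main()
{
    DirState m_state = DSBLOCK_CONSENSUS;

    // The reported bug compares values of two different enumerations, which is
    // what -Wenum-compare flags:
    //   if (m_state != PROCESS_DSBLOCKCONSENSUS) ...   // DirState vs. Action
    // The fix is to compare against a value of the same enumeration:
    if (m_state != DSBLOCK_CONSENSUS)
    {
        std::puts("not in DS block consensus");
    }
    return 0;
}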

Block verification

  1. Complete missing block verification logic for:
  • Micro block
  • Final block
  2. Signature verification of 2/3 shard members:
  • If not present, append a bitmap on the microblock indicating which shard members' signatures to verify

The macro IS_LOOKUP_NODE creates unused parameters

Using IS_LOOKUP_NODE for conditional compilation is fine in general. However, its current use interferes with the compile option -Werror=unused-parameter (enabled by -Wextra) when a parameter is used in one branch but not in the other. Here's an example from our codebase.

~ void Node::LogReceivedDSBlockDetails(const DSBlock& dsblock)
  {
+     //XXX: a workaround to suppress -Werror=unused-parameter when not IS_LOOKUP_NODE
+     (void)dsblock;
  #ifdef IS_LOOKUP_NODE
      LOG_MESSAGE2(to_string(m_mediator.m_currentEpochNum).c_str(),
                   "I the lookup node have deserialized the DS Block");
      LOG_MESSAGE2(to_string(m_mediator.m_currentEpochNum).c_str(),
                   "dsblock.GetHeader().GetDifficulty(): "
                       << (int)dsblock.GetHeader().GetDifficulty());
      ...
      LOG_MESSAGE2(to_string(m_mediator.m_currentEpochNum).c_str(),
                   "dsblock.GetHeader().GetLeaderPubKey(): "
                       << dsblock.GetHeader().GetLeaderPubKey());
  #endif // IS_LOOKUP_NODE
  }

Fix
The (void) cast shown above works, but it is just a workaround and not particularly clean. This page provides several options.
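
For reference, a minimal sketch of two of the usual alternatives (DSBlock here is a hypothetical stand-in; the project's actual choice may differ):

struct DSBlock {};  // hypothetical stand-in for the real class

// Option 1: leave the parameter unnamed when it is only needed under #ifdef.
void LogReceivedDSBlockDetailsUnnamed(const DSBlock& /*dsblock*/) {}

// Option 2 (C++17): [[maybe_unused]] keeps the name available for the #ifdef
// branch without tripping -Werror=unused-parameter.
void LogReceivedDSBlockDetailsAttr([[maybe_unused]] const DSBlock& dsblock) {}

int main()
{
    DSBlock block;
    LogReceivedDSBlockDetailsUnnamed(block);
    LogReceivedDSBlockDetailsAttr(block);
    return 0;
}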

Handling situations when txn hashes proposed in the microblock by the leader are missing with other shard nodes

Currently, if certain transactions reach the leader and the leader includes them in the microblock, consensus stalls when other nodes in the shard haven't seen those transactions. Once #34 is implemented, other shard nodes can communicate to the leader the txn bodies they are missing. The leader can then recompose a microblock containing only those txns which it knows the super-majority has. While the consensus can now be restarted, we might still wish to send the missing txn bodies to the respective nodes so that such txns can be included in the next epoch. If a sharding structure reshuffle is scheduled for the next epoch, such txns would be considered failed and will have to be resent by the wallet user.

Unnecessary lock on variable isVacuousEpoch

Do we need to lock isVacuousEpoch? Based on the naming convention for shared variables and mutexes, I don't see any related mutex for this one.

The following code could be faster if we checked isVacuousEpoch first, before acquiring the two mutexes. It is unlikely that isVacuousEpoch is true, so in most cases we would not need to touch the mutexes at all (see the sketch after the excerpt below).

bool isVacuousEpoch = (m_consensusID >= (NUM_FINAL_BLOCK_PER_POW - NUM_VACUOUS_EPOCHS));
{
    unique_lock<mutex> g(m_mediator.m_node->m_mutexUnavailableMicroBlocks, defer_lock);
    unique_lock<mutex> g2(m_mediator.m_node->m_mutexAllMicroBlocksRecvd, defer_lock);
    lock(g, g2);
    if (isVacuousEpoch && !m_mediator.m_node->m_allMicroBlocksRecvd)
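
A compilable sketch of the suggested reordering: check the flag first, then take both locks only when needed. The mutexes and flag below are hypothetical stand-ins for the m_mediator.m_node members in the excerpt above.

#include <mutex>

// Hypothetical stand-ins for the members referenced in the excerpt above.
std::mutex mutexUnavailableMicroBlocks;
std::mutex mutexAllMicroBlocksRecvd;
bool allMicroBlocksRecvd = false;

void CheckVacuousEpoch(bool isVacuousEpoch)
{
    // Test the cheap flag first; only acquire the two mutexes when it is true,
    // which the issue notes is the uncommon case.
    if (!isVacuousEpoch)
    {
        return;
    }
    std::unique_lock<std::mutex> g(mutexUnavailableMicroBlocks, std::defer_lock);
    std::unique_lock<std::mutex> g2(mutexAllMicroBlocksRecvd, std::defer_lock);
    std::lock(g, g2);
    if (!allMicroBlocksRecvd)
    {
        // handle the vacuous-epoch case here
    }
}

int main()
{
    CheckVacuousEpoch(true);
    return 0;
}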

Implement a feedback mechanism in the consensus protocol code

Currently, if the leader proposes some value and the other nodes don't agree, consensus fails (it just stalls under the current code). We need a way for the other nodes to communicate to the leader why they don't agree with the value, so that the leader can propose an adjusted value that will pass the super-majority test.

Enable CI for Linux builds

Enabling a CI early for Linux builds will improve the ability of developers working on other operating systems to confidently submit proposed changes without breaking builds on other platforms. In particular, I would recommend enabling travis-ci for Linux build support prior to merging changes for other operating systems.

If you have any questions about getting this set up, let me know. I've worked with several other projects using travis-ci, including but not limited to ones using C++.

Use CMake out-of-source build

It is recommended to use out-of-source build for the following reasons:

  1. Enable building multiple variants under one copy of the source tree: node vs. lookup node, debug build vs. release build
  2. Allow a simple way to run a full cleanup (similar to make distclean in GNU autotools): rm -rf BuildDir
  3. Maintain a cleaner source code directory, avoiding temporary files being committed accidentally.

See the CMake FAQ here

Strange output when running testnet script

Hello,
I see the following strange output when running the testnet script on macOS Sierra. I don't know whether it is expected or not.

sysctl: unknown oid 'net.core.somaxconn'
sysctl: unknown oid 'net.core.netdev_max_backlog'
sysctl: unknown oid 'net.ipv4.tcp_tw_reuse'
sysctl: unknown oid 'net.ipv4.tcp_rmem'
sysctl: unknown oid 'net.ipv4.tcp_wmem'
sysctl: unknown oid 'net.ipv4.tcp_mem'
No matching processes belonging to you were found
Unknown option: k
fuser: [-cfu] file ...
	-c	file is treated as mount point
	-f	the report is only for the named files
	-u	print username of pid in parenthesis
[the same "Unknown option: k" / fuser usage block is printed 20 times in total, once per node]
[Node 1  ] [Port 5001] ./local_run/node_0001
[Node 2  ] [Port 5002] ./local_run/node_0002
[Node 3  ] [Port 5003] ./local_run/node_0003
[Node 4  ] [Port 5004] ./local_run/node_0004
[Node 5  ] [Port 5005] ./local_run/node_0005
[Node 6  ] [Port 5006] ./local_run/node_0006
[Node 7  ] [Port 5007] ./local_run/node_0007
[Node 8  ] [Port 5008] ./local_run/node_0008
[Node 9  ] [Port 5009] ./local_run/node_0009
[Node 10 ] [Port 5010] ./local_run/node_0010
[Node 11 ] [Port 5011] ./local_run/node_0011
[Node 12 ] [Port 5012] ./local_run/node_0012
[Node 13 ] [Port 5013] ./local_run/node_0013
[Node 14 ] [Port 5014] ./local_run/node_0014
[Node 15 ] [Port 5015] ./local_run/node_0015
[Node 16 ] [Port 5016] ./local_run/node_0016
[Node 17 ] [Port 5017] ./local_run/node_0017
[Node 18 ] [Port 5018] ./local_run/node_0018
[Node 19 ] [Port 5019] ./local_run/node_0019
[Node 20 ] [Port 5020] ./local_run/node_0020
sh: line 0: ulimit: open files: cannot modify limit: Invalid argument
[the same ulimit warning is printed 20 times in total, once per node]

Fetch data from the seed nodes instead of the lookup node in the synchronization phase

There are certain nodes that are built as lookup nodes. Such nodes do not participate in the consensus; they silently keep accepting and storing blocks and network information.

Currently, any new node joining the network synchronizes with the lookup node. Instead, the lookup node should send a list of seed nodes, and new or lagging nodes should synchronize with those seed nodes: request from a few, wait for responses from some of them, do some verification on the responses received, and so on.

Add initial .gitattributes

I'm working on adding an initial .gitattributes file. It's generally straightforward¹, but I have a few questions:

  1. Was the file src/depends/Makefile intended to be removed as part of 2c459ab? I see that the file is excluded per .gitignore starting in bb9e4e6, but I wanted to verify before submitting a pull request to remove all these files.
  2. While scripts should have permissions 100755 in Git, I see that other files like CMakeLists.txt and tests/Data/Test_Block.cpp are also added like this. Is it a problem to submit a pull request that changes these files from 100755→100644?

¹ I always make sure .gitattributes and .gitignore are consistent during this change, since it can otherwise lead to weird merge conflicts down the road. It's much easier to get it sorted out up front, hence the questions before sending the PR. 😄

Support for building on Windows

I'm in the process of updating the code to build on Windows. The work is not yet complete, but I wanted to post an issue early to communicate that it's at least being investigated.

Path is linked to your desktop

Zilliqa/Makefile

Lines 51 to 54 in fb3ac6b

CMAKE_SOURCE_DIR = /home/junhao/Desktop/octcoin/other_br/production/zilliqa
# The top-level build directory on which CMake was run.
CMAKE_BINARY_DIR = /home/junhao/Desktop/octcoin/other_br/production/zilliqa

Potential division by zero

If numOfComms ever becomes 0, there would be a division-by-zero crash at the following locations.

map<PubKey, Peer> & shard = m_shards.at(i % numOfComms);
shard.insert(make_pair(key, m_allPoWConns.at(key)));
m_publicKeyToShardIdMap.insert(make_pair(key, i % numOfComms));

numOfComms is defined here at

uint32_t numOfComms = m_allPoW2s.size() / COMM_SIZE;

It's not clear whether m_allPoW2s.size() is guaranteed to be large enough (at least COMM_SIZE) before this function is called. It may be better to add a check here.
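
A minimal sketch of such a guard: bail out (or fall back) before numOfComms is ever used as a modulo divisor. The COMM_SIZE value and the function shape below are illustrative stand-ins, not the project's real configuration:

#include <cstddef>
#include <cstdint>
#include <cstdio>

constexpr std::uint32_t COMM_SIZE = 600;  // hypothetical committee size

bool ComputeNumOfComms(std::size_t numPoW2Submissions)
{
    std::uint32_t numOfComms =
        static_cast<std::uint32_t>(numPoW2Submissions / COMM_SIZE);
    if (numOfComms == 0)
    {
        // Bail out before any "i % numOfComms" indexing can divide by zero.
        std::puts("Not enough PoW2 submissions to form a single committee");
        return false;
    }
    // ... safe to use i % numOfComms here ...
    return true;
}

int main()
{
    ComputeNumOfComms(0);
    return 0;
}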

Missing package in readme

While following the readme instructions to run Zilliqa locally I also needed to install the following package: build-essential.

Could you add it to the readme so other people will not run into the same problem?

final collective sig arrived before an expected state

[TID 23077][06:31:24 ][CheckMicroBlockTxnRo] Microblock root computation done DE72A476F4D705CB10DDE6845DABBA1FF375149B180B5EC56AD5EAB568AC5303
[TID 23077][06:31:24 ][CheckMicroBlockTxnRo] Expected root: DE72A476F4D705CB10DDE6845DABBA1FF375149B180B5EC56AD5EAB568AC5303
[TID 23077][06:31:24 ][CheckMicroBlockTxnRo] Root check passed
[TID 23077][06:31:24 ][MicroBlockValidator ] END
[TID 23077][06:31:24 ][Deserialize ] BEGIN
[TID 23077][06:31:24 ][Deserialize ] END
[TID 23077][06:31:24 ][VerifyMessage ] BEGIN
[TID 23077][06:31:24 ][Verify ] BEGIN
[TID 23077][06:31:24 ][Verify ] END
[TID 23077][06:31:24 ][VerifyMessage ] END
[TID 23077][06:31:24 ][GenerateCommitMessag] BEGIN
[TID 23077][06:31:24 ][Serialize ] BEGIN
[TID 23077][06:31:24 ][Serialize ] END
[TID 23077][06:31:24 ][SignMessage ] BEGIN
[TID 23077][06:31:24 ][Sign ] BEGIN
[TID 23077][06:31:24 ][Sign ] END
[TID 23077][06:31:24 ][SignMessage ] END
[TID 23077][06:31:24 ][Serialize ] BEGIN
[TID 23077][06:31:24 ][Serialize ] END
[TID 23077][06:31:24 ][GenerateCommitMessag] END
[TID 23077][06:31:24 ][SendMessage ] BEGIN
[TID 23077][06:31:24 ][SendMessageSocketCor] BEGIN
[TID 23077][06:31:24 ][SendMessageSocketCor] Sending message to <127.0.0.1:5006> (Len=138): 02050100000000777777777777777777777777777777777777777777777777777777777777777700020229B2DFF09E9F7C0896AF082B1B14D453A83C2422C354CC01B2C57A16F5B966BDE60A9C1A79B10E845069F2813BE1CC4D3EC0FCA599506226038D...
[TID 23077][06:31:24 ][SendMessageSocketCor] END
[TID 23077][06:31:24 ][SendMessage ] END
[TID 23077][06:31:24 ][ProcessMessageAnnoun] END
[TID 23077][06:31:24 ][ProcessMessage ] END
[TID 23077][06:31:24 ][ProcessMicroblockCon][Epoch 20] Consensus state = 2
[TID 23077][06:31:24 ][ProcessMicroblockCon] END
[TID 23077][06:31:24 ][Execute ] END
[TID 23077][06:31:24 ][Dispatch ] END
[TID 23077][06:31:24 ][HandleAcceptedConnec] END
[TID 23093][06:31:24 ][HandleAcceptedConnec] BEGIN
[TID 23093][06:31:24 ][HandleAcceptedConnec] Incoming message from <127.0.0.1:23247>
[TID 23093][06:31:24 ][HandleAcceptedConnec] Message received (Len=173): 0205080000000077777777777777777777777777777777777777777777777777777777777777770000000A5AC05208624D8926098EBFEC0588FE2906380BE4E25960BAD8DBCDE524E26C6865D1F6EB6DBD492A45D1276851E5647EE3C1F51E5B30B453BE...
[TID 23093][06:31:24 ][Dispatch ] BEGIN
[TID 23093][06:31:24 ][Execute ] BEGIN
[TID 23093][06:31:24 ][ProcessMicroblockCon] BEGIN
[TID 23093][06:31:24 ][ProcessMessage ] BEGIN
[TID 23093][06:31:24 ][ProcessMessageFinalC] BEGIN
[TID 23093][06:31:24 ][ProcessMessageCollec] BEGIN
[TID 23093][06:31:24 ][CheckState ] Error: Processing finalcollectivesig but response not yet done
[TID 23093][06:31:24 ][ProcessMessageCollec] END
[TID 23093][06:31:24 ][ProcessMessageFinalC] END
[TID 23093][06:31:24 ][ProcessMessage ] END
[TID 23093][06:31:24 ][ProcessMicroblockCon][Epoch 20] Consensus state = 2
[TID 23093][06:31:24 ][ProcessMicroblockCon] END
[TID 23093][06:31:24 ][Execute ] END
[TID 23093][06:31:24 ][Dispatch ] END
[TID 23093][06:31:24 ][HandleAcceptedConnec] END
[TID 23089][06:31:24 ][ProcessMessage ] BEGIN
[TID 23089][06:31:24 ][ProcessMessageCollec] BEGIN
[TID 23089][06:31:24 ][ProcessMessageCollec] BEGIN
[TID 23089][06:31:24 ][Deserialize ] BEGIN
[TID 23089][06:31:24 ][Deserialize ] END
[TID 23089][06:31:24 ][AggregateKeys ] BEGIN
[TID 23089][06:31:24 ][AggregateKeys ] END
[TID 23089][06:31:24 ][Verify ] BEGIN
[TID 23089][06:31:24 ][Verify ] END
[TID 23089][06:31:24 ][Deserialize ] BEGIN
[TID 23089][06:31:24 ][Deserialize ] END
[TID 23089][06:31:24 ][VerifyMessage ] BEGIN
[TID 23089][06:31:24 ][Verify ] BEGIN
[TID 23089][06:31:24 ][Verify ] END
[TID 23089][06:31:24 ][VerifyMessage ] END
[TID 23089][06:31:24 ][GenerateCommitMessag] BEGIN
[TID 23089][06:31:24 ][Serialize ] BEGIN
[TID 23089][06:31:24 ][Serialize ] END
[TID 23089][06:31:24 ][SignMessage ] BEGIN
[TID 23089][06:31:24 ][Sign ] BEGIN
[TID 23089][06:31:24 ][Sign ] END
[TID 23089][06:31:24 ][SignMessage ] END
[TID 23089][06:31:24 ][Serialize ] BEGIN
[TID 23089][06:31:24 ][Serialize ] END
[TID 23089][06:31:24 ][GenerateCommitMessag] END
[TID 23089][06:31:24 ][Serialize ] BEGIN
[TID 23089][06:31:24 ][Serialize ] END
[TID 23089][06:31:24 ][SendMessage ] BEGIN
[TID 23089][06:31:24 ][SendMessageSocketCor] BEGIN
[TID 23089][06:31:24 ][SendMessageSocketCor] Sending message to <127.0.0.1:5006> (Len=138): 0205050000000077777777777777777777777777777777777777777777777777777777777777770002028C36EAE70D838E5C9C7F4759F2798D40DD6044C3F9E18E8B2CEDCF8BDECF3132C815E50BE7A7A9391900FAE3A8AD086D929CE243574F37860849...
[TID 23089][06:31:24 ][SendMessageSocketCor] END
[TID 23089][06:31:24 ][SendMessage ] END
[TID 23089][06:31:24 ][ProcessMessageCollec] END
[TID 23089][06:31:24 ][ProcessMessageCollec] END
[TID 23089][06:31:24 ][ProcessMessage ] END
[TID 23089][06:31:24 ][ProcessMicroblockCon][Epoch 20] Consensus state = 6
[TID 23089][06:31:24 ][ProcessMicroblockCon] END
[TID 23089][06:31:24 ][Execute ] END
[TID 23089][06:31:24 ][Dispatch ] END
[TID 23089][06:31:24 ][HandleAcceptedConnec] END
[TID 23121][06:31:24 ][HandleAcceptedConnec] BEGIN
[TID 23121][06:31:24 ][HandleAcceptedConnec] Incoming message from <127.0.0.1:3792>
[TID 23121][06:31:24 ][HandleAcceptedConnec] Message received (Len=1069): 02060000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000050000000000010000000000000000000000000000000000000000000000000000000000000000000000640000000000000000...
[TID 23121][06:31:24 ][RetrieveBroadcastLis] BEGIN
[TID 23121][06:31:24 ][GetBroadcastList ] BEGIN
[TID 23121][06:31:24 ][GetBroadcastList ] END
[TID 23121][06:31:24 ][RetrieveBroadcastLis] END
[TID 23121][06:31:24 ][Dispatch ] BEGIN
[TID 23121][06:31:24 ][Execute ] BEGIN
[TID 23121][06:31:24 ][ProcessFinalBlock ] BEGIN
[TID 23121][06:31:24 ][ProcessFinalBlock ][Epoch 20] Waiting for state change from MICROBLOCK_CONSENSUS to PROCESS_FINALBLOCK
[TID 23114][06:31:24 ][HandleAcceptedConnec] BEGIN
[TID 23114][06:31:24 ][HandleAcceptedConnec] Incoming message from <127.0.0.1:8912>
[TID 23114][06:31:24 ][HandleAcceptedConnec] Discarding duplicate broadcast message
[TID 23114][06:31:24 ][HandleAcceptedConnec] END
[TID 23112][06:31:24 ][HandleAcceptedConnec] BEGIN
[TID 23112][06:31:24 ][HandleAcceptedConnec] Incoming message from <127.0.0.1:11472>
[TID 23112][06:31:24 ][HandleAcceptedConnec] Message received (Len=206): 020497ABA911B5C9D0A80533A4C871668E19E4C579AEA58E1528840DFACBD92BC011DFA2BF5000000000000000000000000000000000000000000000000000000000000026D40000000000000000000000000000000000000000D6942D1FA00A3CA5D26F...
[TID 23112][06:31:24 ][Dispatch ] BEGIN
[TID 23112][06:31:24 ][Execute ] BEGIN
[TID 23112][06:31:24 ][ProcessSubmitTransac] BEGIN
[TID 23112][06:31:24 ][ProcessSubmitTransac][Epoch 20] Not in ProcessSubmitTxn state -- waiting!

The final collective sig is ignored, and the node stalls and does not proceed.

A fix was made in consensusbackup: it no longer returns false when the final collective sig is received but the commit/response is not yet done.

We need to monitor this fix.

Caught system_error with code generic:11 meaning Resource temporarily unavailable

LOG_MESSAGE:

[TID 32744][08:29:54 ][DetachedFunction ] Error: 1 times tried. Caught system_error with code generic:11 meaning Resource temporarily unavailable

[TID 32744][08:29:54 ][DetachedFunction ] Error: 2 times tried. Caught system_error with code generic:11 meaning Resource temporarily unavailable

===========================================================================

/var/log/kern.log:

Mar 24 16:29:08 liuhc-ubuntu kernel: [ 4528.091515] cgroup: fork rejected by pids controller in /user.slice/user-1000.slice
Mar 24 16:30:10 liuhc-ubuntu kernel: [ 4590.190103] do_trap: 26 callbacks suppressed

build failed

/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp: In member function ‘void Logger::LogMessage(const char*, const char*)’:
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:107:92: error: ‘put_time’ was not declared in this scope
logfile << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN)
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:112:89: error: ‘put_time’ was not declared in this scope
cout << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN) << "]["
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp: In member function ‘void Logger::LogMessage(const char*, const char*, const char*)’:
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:130:92: error: ‘put_time’ was not declared in this scope
logfile << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN)
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:136:89: error: ‘put_time’ was not declared in this scope
cout << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN)
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp: In member function ‘void Logger::LogMessageAndPayload(const char*, const std::vector&, size_t, const char*)’:
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:169:5: error: ‘unique_ptr’ was not declared in this scope
unique_ptr<char[]> payload_string = make_unique<char[]>(payload_string_len);
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:169:16: error: expected primary-expression before ‘char’
unique_ptr<char[]> payload_string = make_unique<char[]>(payload_string_len);
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:172:9: error: ‘payload_string’ was not declared in this scope
payload_string.get()[payload_string_idx++] = hex_table[(payload.at(payload_idx) >> 4) & 0xF];
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:175:5: error: ‘payload_string’ was not declared in this scope
payload_string.get()[payload_string_len-1] = '\0';
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:189:96: error: ‘put_time’ was not declared in this scope
logfile << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN)
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:196:96: error: ‘put_time’ was not declared in this scope
logfile << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN)
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:205:93: error: ‘put_time’ was not declared in this scope
cout << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN)
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:212:93: error: ‘put_time’ was not declared in this scope
cout << "[TID " << PAD(tid, TID_LEN) << "][" << PAD(put_time(gmtTime, "%H:%M:%S"), TIME_LEN)
^
/home/kenneth/Projects/Zilliqa/Zilliqa/src/libUtils/Logger.cpp:30:59: note: in definition of macro ‘PAD’
#define PAD(n, len) setw(len) << setfill(' ') << right << n
^
src/libUtils/CMakeFiles/Utils.dir/build.make:86: recipe for target 'src/libUtils/CMakeFiles/Utils.dir/Logger.cpp.o' failed
make[2]: *** [src/libUtils/CMakeFiles/Utils.dir/Logger.cpp.o] Error 1
CMakeFiles/Makefile2:1421: recipe for target 'src/libUtils/CMakeFiles/Utils.dir/all' failed
make[1]: *** [src/libUtils/CMakeFiles/Utils.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 4%] Building CXX object src/depends/common/CMakeFiles/Common.dir/CommonIO.cpp.o
[ 4%] Building CXX object src/depends/libDatabase/CMakeFiles/Overlay.dir/OverlayDB.cpp.o
[ 5%] Building CXX object src/depends/json_spirit/CMakeFiles/json_spirit.dir/json_spirit_value.cpp.o
[ 6%] Building CXX object src/depends/common/CMakeFiles/Common.dir/FileSystem.cpp.o
[ 7%] Building CXX object src/depends/common/CMakeFiles/Common.dir/FixedHash.cpp.o
[ 8%] Building CXX object src/depends/json_spirit/CMakeFiles/json_spirit.dir/json_spirit_writer.cpp.o
[ 9%] Linking CXX shared library libOverlay.so
[ 9%] Built target Overlay
[ 9%] Building CXX object src/depends/common/CMakeFiles/Common.dir/RLP.cpp.o
[ 10%] Building CXX object src/depends/common/CMakeFiles/Common.dir/SHA3.cpp.o
[ 10%] Linking CXX shared library libCommon.so
[ 10%] Built target Common
[ 10%] Linking CXX shared library libjson_spirit.so
[ 10%] Built target json_spirit
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
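
The undeclared-name errors above are consistent with Logger.cpp missing the standard headers that declare std::put_time and std::unique_ptr/std::make_unique, or with a GCC release that predates std::put_time support (it arrived in GCC 5). This is an assumption based on the log, not a verified fix; a self-contained snippet showing where those names live:

#include <ctime>     // std::time, std::gmtime
#include <iomanip>   // std::put_time, std::setw, std::setfill
#include <iostream>
#include <memory>    // std::unique_ptr, std::make_unique (C++14)

int main()
{
    std::time_t now = std::time(nullptr);
    // std::put_time is declared in <iomanip>; the errors suggest Logger.cpp
    // uses it without that header being visible.
    std::cout << std::put_time(std::gmtime(&now), "%H:%M:%S") << std::endl;

    // std::unique_ptr and std::make_unique are declared in <memory>.
    std::unique_ptr<char[]> payload_string = std::make_unique<char[]>(16);
    (void)payload_string;
    return 0;
}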

Transaction class changes

Transaction
(
    uint32_t version,
    const boost::multiprecision::uint256_t & nonce,
    const Address & toAddr,
    const Address & fromAddr,
    const boost::multiprecision::uint256_t & amount,
    const std::array<unsigned char, TRAN_SIG_SIZE> & signature
);

fromAddr needs to be replaced with the sender's public key.
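
A sketch of what the adjusted signature might look like, mirroring the declaration above with fromAddr swapped for the sender's public key (PubKey is assumed to be the project's key type; this is illustrative, not the final API):

Transaction
(
    uint32_t version,
    const boost::multiprecision::uint256_t & nonce,
    const Address & toAddr,
    const PubKey & senderPubKey,   // replaces fromAddr; the address can be derived from it
    const boost::multiprecision::uint256_t & amount,
    const std::array<unsigned char, TRAN_SIG_SIZE> & signature
);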

Build errors in Scheduler (Linux/gcc compatibility issue)

Fails to build from the tar file Zilliqa-Durianv0.1.tar.gz on a fresh install of Fedora 27 x86_64, see attached log.

Installed packages:

boost-devel.x86_64                    1.64.0-4.fc27
cmake.x86_64                          3.10.1-4.fc27
gcc-c++.x86_64                        7.2.1-2.fc27
jsoncpp-devel.x86_64                  1.8.3-1.fc27
leveldb-devel.x86_64                  1.18-1.fc26
openssl-devel.x86_64                  1:1.1.0g-1.fc27

Durian_build.log

ThreadPool non-atomic addJob method

AddJob is not thread-safe, since ++_jobsLeft is not locked together with _queue.push_back; see the code.

void AddJob(const std::function<void()>& job)
{
    // scoped lock
    {
        std::lock_guard<std::mutex> lock(_queueMutex);
#if CONTIGUOUS_JOBS_MEMORY
        _queue.push_back(job);
#else
        _queue.push(job);
#endif
    }
    // scoped lock
    {
        std::lock_guard<std::mutex> lock(_jobsLeftMutex);
        ++_jobsLeft;
    }
    _jobAvailableVar.notify_one();
}

Possible fix
The two mutexes should be acquired at the same time using std::lock (or C++17's std::scoped_lock).
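
A compilable sketch of that approach; the names mirror the excerpt above, and std::scoped_lock is the C++17 convenience wrapper over std::lock:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class ThreadPoolSketch
{
    std::queue<std::function<void()>> _queue;
    std::mutex _queueMutex;
    std::mutex _jobsLeftMutex;
    int _jobsLeft = 0;
    std::condition_variable _jobAvailableVar;

public:
    void AddJob(const std::function<void()>& job)
    {
        {
            // Acquire both mutexes together (deadlock-free), so the queue push and
            // the ++_jobsLeft update are observed as one step by other threads.
            std::scoped_lock lock(_queueMutex, _jobsLeftMutex);
            _queue.push(job);
            ++_jobsLeft;
        }
        _jobAvailableVar.notify_one();
    }
};

int main()
{
    ThreadPoolSketch pool;
    pool.AddJob([] {});
    return 0;
}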

Investigate threadpool not processing jobs

There are instances where network messages, such as consensus messages, are received but not processed when we conduct large-scale testing. This causes some nodes to stall.

Current interim hot fix is to increase the number of threads to a larger number. However, we need to investigate this issue further.

Improve the less-than operator overloading with std::tie

Most of the block classes (e.g., MicroBlock, TxBlock) have overloaded operator< and operator>; the current implementation is a bit verbose. See the following example.

bool MicroBlockHeader::operator<(const MicroBlockHeader & header) const

For operator <, a clearer way is to use std::tie.

    bool operator<(const S& rhs) const
    {
        // compares n to rhs.n,
        // then s to rhs.s,
        // then d to rhs.d
        return std::tie(n, s, d) < std::tie(rhs.n, rhs.s, rhs.d);
    }

For operator>, it is better to define it in terms of operator<. See the suggestion.

inline bool operator< (const X& lhs, const X& rhs){ /* do actual comparison */ }
inline bool operator> (const X& lhs, const X& rhs){ return rhs < lhs; }

Support for Mac OS X

Compilation (warnings and enhancements)

  • syscall is deprecated
  • pthread unused argument during compilation
  • remove need of -l<LIBRARY_NAME> in root CMakeLists.txt and replace them by cmake FindModules and library variables in submodule CMakeLists.txt
  • remove need of hardcoded paths like -L/usr/local/Cellar/openssl/1.0.2/lib from CMakeLists.txt if possible
  • replace the sysctl commands used to enable core dumps
  • fuser unknown option k from the test script

Runtime

  • thread id incorrectly shown as -1 always in the logs

Add state root to final block composed at the last mini-epoch of every major epoch

The DS committee usually doesn't have the txn bodies when composing final blocks. Since txn body sharing is asynchronous, there are no guarantees as to when the entire network can agree on a common account state. The current plan is to make the network synchronous every 2 hrs or so (at the last mini-epoch of every major epoch), at the cost of extra delays and refetches for missing txn bodies, and to add the state root to the final block composed then. After this epoch, the sharding structure will be reshuffled.

Not building on Fedora 27

[mastag@FedoraWS build]$ cmake -DCMAKE_BUILD_TYPE=Debug ..
-- The C compiler identification is GNU 7.2.1
-- The CXX compiler identification is GNU 7.2.1
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- We are on a Linux system
-- Boost version: 1.64.0
-- Boost version: 1.64.0
-- Found the following Boost libraries:
-- unit_test_framework
-- system
-- filesystem
-- boost header: /usr/include
-- boost libs : /usr/lib64/libboost_unit_test_framework.so;/usr/lib64/libboost_system.so;/usr/lib64/libboost_filesystem.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/mastag/src/Zilliqa/build
[mastag@FedoraWS build]$ make
Scanning dependencies of target Common
[ 1%] Building CXX object src/depends/common/CMakeFiles/Common.dir/Common.cpp.o
[ 2%] Building CXX object src/depends/common/CMakeFiles/Common.dir/CommonData.cpp.o
[ 2%] Building CXX object src/depends/common/CMakeFiles/Common.dir/CommonIO.cpp.o
[ 3%] Building CXX object src/depends/common/CMakeFiles/Common.dir/FileSystem.cpp.o
[ 4%] Building CXX object src/depends/common/CMakeFiles/Common.dir/FixedHash.cpp.o
[ 4%] Building CXX object src/depends/common/CMakeFiles/Common.dir/RLP.cpp.o
[ 5%] Building CXX object src/depends/common/CMakeFiles/Common.dir/SHA3.cpp.o
[ 5%] Linking CXX shared library libCommon.so
[ 5%] Built target Common
Scanning dependencies of target json_spirit
[ 5%] Building CXX object src/depends/json_spirit/CMakeFiles/json_spirit.dir/json_spirit_reader.cpp.o
[ 6%] Building CXX object src/depends/json_spirit/CMakeFiles/json_spirit.dir/json_spirit_value.cpp.o
[ 7%] Building CXX object src/depends/json_spirit/CMakeFiles/json_spirit.dir/json_spirit_writer.cpp.o
[ 7%] Linking CXX shared library libjson_spirit.so
[ 7%] Built target json_spirit
Scanning dependencies of target Utils
[ 7%] Building CXX object src/libUtils/CMakeFiles/Utils.dir/DataConversion.cpp.o
[ 8%] Building CXX object src/libUtils/CMakeFiles/Utils.dir/Logger.cpp.o
[ 9%] Building CXX object src/libUtils/CMakeFiles/Utils.dir/SanityChecks.cpp.o
[ 9%] Building CXX object src/libUtils/CMakeFiles/Utils.dir/Scheduler.cpp.o
In file included from /home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:18:0:
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:37:26: error: ‘std::function’ has not been declared
void ScheduleAt(std::function<void (void)> f,
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:37:34: error: expected ‘,’ or ‘...’ before ‘<’ token
void ScheduleAt(std::function<void (void)> f,
^
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:40:29: error: ‘std::function’ has not been declared
void ScheduleAfter(std::function<void (void)> f, int64_t deltaMilliSeconds);
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:40:37: error: expected ‘,’ or ‘...’ before ‘<’ token
void ScheduleAfter(std::function<void (void)> f, int64_t deltaMilliSeconds);
^
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:42:36: error: ‘std::function’ has not been declared
void SchedulePeriodically(std::function<void (void)> f, int64_t deltaMilliSeconds);
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:42:44: error: expected ‘,’ or ‘...’ before ‘<’ token
void SchedulePeriodically(std::function<void (void)> f, int64_t deltaMilliSeconds);
^
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:47:76: error: ‘function’ is not a member of ‘std’
std::multimap<std::chrono::time_point<std::chrono::system_clock>, std::function<void (void)>> taskQueue;
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:47:76: note: suggested alternative: ‘is_function’
std::multimap<std::chrono::time_point<std::chrono::system_clock>, std::function<void (void)>> taskQueue;
^~~~~~~~
is_function
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:47:76: error: ‘function’ is not a member of ‘std’
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:47:76: note: suggested alternative: ‘is_function’
std::multimap<std::chrono::time_point<std::chrono::system_clock>, std::function<void (void)>> taskQueue;
^~~~~~~~
is_function
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:47:96: error: template argument 2 is invalid
std::multimap<std::chrono::time_point<std::chrono::system_clock>, std::function<void (void)>> taskQueue;
^~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.h:47:96: error: template argument 4 is invalid
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp: In member function ‘void Scheduler::ServiceQueue()’:
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:38:30: error: request for member ‘empty’ in ‘((Scheduler*)this)->Scheduler::taskQueue’, which is of non-class type ‘int’
while (taskQueue.empty()) {
^~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:42:31: error: request for member ‘empty’ in ‘((Scheduler*)this)->Scheduler::taskQueue’, which is of non-class type ‘int’
while (!taskQueue.empty()) {
^~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:43:94: error: request for member ‘begin’ in ‘((Scheduler*)this)->Scheduler::taskQueue’, which is of non-class type ‘int’
std::chrono::time_point<std::chrono::system_clock> timeToWaitFor = taskQueue.begin()->first;
^~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:50:27: error: request for member ‘empty’ in ‘((Scheduler*)this)->Scheduler::taskQueue’, which is of non-class type ‘int’
if (taskQueue.empty())
^~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:55:18: error: ‘function’ is not a member of ‘std’
std::function<void (void)> f = taskQueue.begin()->second;
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:55:18: note: suggested alternative: ‘is_function’
std::function<void (void)> f = taskQueue.begin()->second;
^~~~~~~~
is_function
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:55:27: error: expected primary-expression before ‘void’
std::function<void (void)> f = taskQueue.begin()->second;
^~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:56:23: error: request for member ‘erase’ in ‘((Scheduler*)this)->Scheduler::taskQueue’, which is of non-class type ‘int’
taskQueue.erase(taskQueue.begin());
^~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:56:39: error: request for member ‘begin’ in ‘((Scheduler*)this)->Scheduler::taskQueue’, which is of non-class type ‘int’
taskQueue.erase(taskQueue.begin());
^~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:60:17: error: ‘f’ was not declared in this scope
f();
^
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp: At global scope:
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:71:33: error: variable or field ‘ScheduleAt’ declared void
void Scheduler::ScheduleAt(std::function<void (void)> f, chrono::time_point<chrono::system_clock> t)
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:71:33: error: ‘function’ is not a member of ‘std’
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:71:33: note: suggested alternative: ‘is_function’
void Scheduler::ScheduleAt(std::function<void (void)> f, chrono::time_point<chrono::system_clock> t)
^~~~~~~~
is_function
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:71:42: error: expected primary-expression before ‘void’
void Scheduler::ScheduleAt(std::function<void (void)> f, chrono::time_point<chrono::system_clock> t)
^~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:71:99: error: expected primary-expression before ‘t’
void Scheduler::ScheduleAt(std::function<void (void)> f, chrono::time_point<chrono::system_clock> t)
^
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:80:36: error: variable or field ‘ScheduleAfter’ declared void
void Scheduler::ScheduleAfter(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:80:36: error: ‘function’ is not a member of ‘std’
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:80:36: note: suggested alternative: ‘is_function’
void Scheduler::ScheduleAfter(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~~~~~
is_function
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:80:45: error: expected primary-expression before ‘void’
void Scheduler::ScheduleAfter(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:80:69: error: expected primary-expression before ‘deltaMilliSeconds’
void Scheduler::ScheduleAfter(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~~~~~~~~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:85:59: error: ‘std::function’ has not been declared
static void SchedulePeriodicallyHelper(Scheduler* s, std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:85:67: error: expected ‘,’ or ‘...’ before ‘<’ token
static void SchedulePeriodicallyHelper(Scheduler* s, std::function<void (void)> f, int64_t deltaMilliSeconds)
^
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp: In function ‘void SchedulePeriodicallyHelper(Scheduler*, int)’:
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:87:5: error: ‘f’ was not declared in this scope
f();
^
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:88:62: error: ‘deltaMilliSeconds’ was not declared in this scope
s->ScheduleAfter(bind(&SchedulePeriodicallyHelper, s, f, deltaMilliSeconds), deltaMilliSeconds);
^~~~~~~~~~~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:88:22: error: ‘bind’ was not declared in this scope
s->ScheduleAfter(bind(&SchedulePeriodicallyHelper, s, f, deltaMilliSeconds), deltaMilliSeconds);
^~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:88:22: note: suggested alternative: ‘rand’
s->ScheduleAfter(bind(&SchedulePeriodicallyHelper, s, f, deltaMilliSeconds), deltaMilliSeconds);
^~~~
rand
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp: At global scope:
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:91:43: error: variable or field ‘SchedulePeriodically’ declared void
void Scheduler::SchedulePeriodically(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~~~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:91:43: error: ‘function’ is not a member of ‘std’
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:91:43: note: suggested alternative: ‘is_function’
void Scheduler::SchedulePeriodically(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~~~~~
is_function
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:91:52: error: expected primary-expression before ‘void’
void Scheduler::SchedulePeriodically(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~
/home/mastag/src/Zilliqa/src/libUtils/Scheduler.cpp:91:76: error: expected primary-expression before ‘deltaMilliSeconds’
void Scheduler::SchedulePeriodically(std::function<void (void)> f, int64_t deltaMilliSeconds)
^~~~~~~~~~~~~~~~~
make[2]: *** [src/libUtils/CMakeFiles/Utils.dir/build.make:135: src/libUtils/CMakeFiles/Utils.dir/Scheduler.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1422: src/libUtils/CMakeFiles/Utils.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
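
The repeated "‘std::function’ has not been declared" errors usually mean Scheduler.h uses std::function without including <functional> (newer GCC/libstdc++ releases no longer pull it in transitively). That is an assumption based on the log rather than a verified fix; a self-contained snippet of the declarations involved:

#include <chrono>       // std::chrono::time_point, std::chrono::system_clock
#include <functional>   // std::function (the name GCC 7 reports as undeclared)
#include <map>          // std::multimap

// The member from Scheduler.h that triggers the cascade when the header is missing:
std::multimap<std::chrono::time_point<std::chrono::system_clock>,
              std::function<void(void)>> taskQueue;

int main() { return 0; }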
