
vulcanizedb's Introduction

[vulcanize logo]

vulcanizedb's People

Contributors

afdudley, ana0, chapsuk, d-xo, elizabethengelman, gitter-badger, grizz, gslaughl, i-norden, jameschristie, konstantinzolotarev, m0ar, mkrump, redsquirrel, rmulhol, takagoto, wanderingstan, xwvvvvwx, yaoandrew


vulcanizedb's Issues

More extensive debugging for plugin build failure

Right now when the plugin fails to build we get fairly obscure error messages like:

{
  "SubCommand": "compose",
  "file": "/go/src/github.com/vulcanize/vulcanizedb/cmd/compose.go:124",
  "func": "github.com/vulcanize/vulcanizedb/cmd.compose",
  "level": "fatal",
  "msg": "unable to build .so file: exit status 1",
  "time": "2019-08-10T10:50:30Z"
}

It would be nice if we had more detailed information about why the build failed.
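
A minimal sketch of how the build step could surface the compiler output, assuming the plugin is built by shelling out to go build -buildmode=plugin (the paths below are hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// buildPlugin shells out to the Go toolchain and, on failure, wraps the
// compiler's combined stdout/stderr into the returned error so the fatal log
// line is actionable instead of just "exit status 1".
func buildPlugin(soPath, pkgPath string) error {
	cmd := exec.Command("go", "build", "-buildmode=plugin", "-o", soPath, pkgPath)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("unable to build .so file: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := buildPlugin("transformerExporter.so", "./plugins/transformerExporter"); err != nil {
		fmt.Println(err)
	}
}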

ERC20 Transformer

As an application developer running VulcanizeDB,
I want to be able to identify ERC20 contracts and persist that they exist,
So that I have an inventory of ERC20 contracts from which to perform further work.

As an application developer running VulcanizeDB,
I want to persist data fetched from queries against an ERC20 contract,
So that I have a record of the contract's state.

Notes
For the first pass, focus queries against ERC20 contracts on the methods defined by the interface.

Geth cold import

As an application developer setting up VulcanizeDB,
I want to be able to sync core data out of a cold instance of LevelDB,
So that I can sync more quickly and not use the RPC.

Setup information

Hello,

As the Ethereum blockchain is huge (a full sync is > 100 GB), I would like to know how much disk space is needed to store the Ethereum blockchain with VulcanizeDB. I didn't find anything related in the requirements. It would be great to give us some idea of the time and disk space needed for a complete setup.

Thanks

Re-transform transformed events

Currently, event logs/storage diffs are generally only transformed once - e.g. by a transformer configured to watch that address. However, it's conceivable that multiple transformers would want to digest the same data differently. We could address this with scaffolding for transformers designed to work with already-transformed data.

Restart cold import in DB with blocks from another node

As a developer running VulcanizeDB,
I want to be able to resume a cold import after an interrupt even if I also have blocks synced from another node,
So that I am not blocked by an interrupted cold import.

NOTES
Restarting a cold import with no params after having already run it for a while yields: Error executing cold import: Won't add block that already exists. You can work around this by specifying the starting and ending block numbers for the range of missing blocks. However, this should not be happening, because cold import should only be attempting to add missing blocks in the first place. This appears to be a bug in the way the block repository identifies missing blocks.
This edge case is created by having two copies of the same block: one from a previous cold import, and another created by another node. The block repository treats the block as missing because one row does not match the node fingerprint of the cold import, and then the insert errors because another copy (that matches the fingerprint) exists.

Generate Queries for Log data

The current sai-service monitors events (via the logs) for a series of contracts. The Ethereum logs capture events emitted by the Ethereum VM. The logs have the following format:

{
    address: "0x448a5065aebb8e423f0896e6c5d525c040f59af3",
    blockHash: "0xe627181ae6bf9835066077235ae14aa75a54e8018ac116508f783850200c15c5",
    blockNumber: 4752015,
    data: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000004492b0d72163617000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    logIndex: 40,
    removed: false,
    topics: ["0x92b0d72100000000000000000000000000000000000000000000000000000000", "0x000000000000000000000000f07674f6ac6632e253c291b694f9c2e2ed69ebbb", "0x6361700000000000000000000000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000000000000000000000000000"],
    transactionHash: "0x80c8375c679404f73833f36be0d2903c4931764827da2a323d8ba83749c20715",
    transactionIndex: 71
}

The various topics are indexed, allowing for fast retrieval. Typically the topics are encoded as follows (https://ethereum.stackexchange.com/questions/12553/understanding-logs-and-log-blooms):

  • topic0: the hex-encoded Keccak-256 (sha3) hash of the event signature (e.g. web3.sha3("LogValue(bytes32)")); see the Go snippet below
  • topics 1-3: the remaining indexed event parameters, ABI-encoded (https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI)
  • data: the remaining non-indexed, ABI-encoded event parameters concatenated together
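
For reference, topic0 can be reproduced in Go with go-ethereum's crypto package; a minimal sketch using the LogValue example above:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// topic0 is the Keccak-256 hash of the canonical event signature,
	// equivalent to web3.sha3("LogValue(bytes32)").
	topic0 := crypto.Keccak256Hash([]byte("LogValue(bytes32)"))
	fmt.Println(topic0.Hex())
}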

However, it does seem that this format can be overridden. sai-service, for example, watches for LogNote events. These seem to be custom logging events that don't necessarily adhere to the convention described above (need to confirm this is correct) (solidity code below).

////// lib/ds-thing/lib/ds-note/src/note.sol
/// note.sol -- the `note' modifier, for logging calls as events
contract DSNote {
    event LogNote(
        bytes4   indexed  sig,
        address  indexed  guy,
        bytes32  indexed  foo,
        bytes32  indexed  bar,
        uint              wad,
        bytes             fax
    ) anonymous;

    modifier note {
        bytes32 foo;
        bytes32 bar;

        assembly {
            foo := calldataload(4)
            bar := calldataload(36)
        }

        LogNote(msg.sig, msg.sender, foo, bar, msg.value, msg.data);

        _;
    }
}

Right now we have several potential ways to capture logs for a particular address, so that part is clear. The part that is uncertain is how to generate the queries for contract events of interest. Right now (for sai-service) the generation of the filters (queries) is done via the web3 api.

tub.LogNote({ sig: methodSig('mold(bytes32,uint256)'), foo: '0x6d61740000000000000000000000000000000000000000000000000000000000' }, { fromBlock })

generates the following filter

var filterOptions = {
      "toBlock": "latest",
      "fromBlock": "0x488290",
      "address": "0x448a5065aebb8e423f0896e6c5d525c040f59af3",
      "topics": ["0x92b0d72100000000000000000000000000000000000000000000000000000000", null, "0x6d61740000000000000000000000000000000000000000000000000000000000", null]
};

Recreating this mapping from ABI to filters (either in Go or perhaps via web3) is likely required for users to be able to retrieve events in a way that is meaningful to them.
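
For comparison, a sketch of the same filter built with go-ethereum's FilterQuery (nil inner topic slices act as the wildcards shown as null above; the endpoint is illustrative and the hashes are copied from the example filter):

package main

import (
	"context"
	"fmt"
	"math/big"

	ethereum "github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		panic(err)
	}

	query := ethereum.FilterQuery{
		FromBlock: big.NewInt(0x488290),
		ToBlock:   nil, // nil means "latest"
		Addresses: []common.Address{common.HexToAddress("0x448a5065aebb8e423f0896e6c5d525c040f59af3")},
		Topics: [][]common.Hash{
			{common.HexToHash("0x92b0d72100000000000000000000000000000000000000000000000000000000")}, // sig: mold(bytes32,uint256)
			nil, // any guy
			{common.HexToHash("0x6d61740000000000000000000000000000000000000000000000000000000000")}, // foo: "mat"
			nil, // any bar
		},
	}

	logs, err := client.FilterLogs(context.Background(), query)
	if err != nil {
		panic(err)
	}
	fmt.Println("matched logs:", len(logs))
}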

Categorize blocks after their intended chain has been determined (Detect reorgs, properly label uncles, etc)

From AFDudley:

If the local canonical ordering of blocks is indexed by when we saw the block, we can then generate the rest of the required tables from that. The global canonical chain, which should be the basis of most derived data served to users, will be calculated by:

  1. Finding the highest block number; call it HEAD.
  2. CANON_NUM = HEAD - 12.
  3. For all blocks at CANON_NUM, finding the one with the greatest difficulty; this should be the head of the canonical chain (sketched in Go below).

This will need to be tested against the full chain to see if this algorithm is ever wrong.
Source of 12 as a magic number: https://github.com/ethereum/go-ethereum/blob/9e5f03b6c487175cc5aa1224e5e12fd573f483a7/core/state/database.go#L35
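
A minimal sketch of that selection over blocks we have already indexed (the struct and the 12-block offset are assumptions taken directly from the description above):

package main

import "fmt"

// indexedBlock is a stand-in for however blocks are stored locally.
type indexedBlock struct {
	Number     int64
	Hash       string
	Difficulty int64
}

// canonicalHead takes the highest block number seen (HEAD), steps back 12
// blocks to CANON_NUM, and among the candidates at that height picks the one
// with the greatest difficulty.
func canonicalHead(blocks []indexedBlock) (indexedBlock, bool) {
	var head int64
	for _, b := range blocks {
		if b.Number > head {
			head = b.Number
		}
	}
	canonNum := head - 12

	var best indexedBlock
	found := false
	for _, b := range blocks {
		if b.Number == canonNum && (!found || b.Difficulty > best.Difficulty) {
			best = b
			found = true
		}
	}
	return best, found
}

func main() {
	blocks := []indexedBlock{
		{Number: 100, Hash: "0xaa", Difficulty: 10},
		{Number: 88, Hash: "0xbb", Difficulty: 7},
		{Number: 88, Hash: "0xcc", Difficulty: 9},
	}
	fmt.Println(canonicalHead(blocks)) // picks 0xcc at height 88
}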

The fundamental problem is that this claim is "false" (it's not true in a global context; nodes don't know when they are wrong about canonicality, by definition): https://github.com/ethereum/go-ethereum/blob/43c8a1914ce5014c3589c2de0613a06eb6ad3f69/core/blockchain.go#L76

Where re-orgs are defined in go-ethereum: https://github.com/ethereum/go-ethereum/blob/43c8a1914ce5014c3589c2de0613a06eb6ad3f69/core/blockchain.go#L818

Dockerize development

From gustin:

Dockerize the development environment to isolate it from instances where geth and other blockchain projects may conflict (e.g. quickbooks complicates the build in a global space).

Handler Boilerplate

As a VulcanizeDB application developer,
I want to be able to clone a handler boilerplate repo,
So that I can get to work writing my own handlers with minimal setup.

Create state trie extractor

From AFDudley:

The details of this need to be fleshed out a bit, but the general idea is that we should create a tool for moving state trie data around. It should:

  1. extract state trie data from a geth leveldb
    a. as JSON blobs
    b. as IPLD (maybe CBOR) blobs
    c. as some other efficient format.
  2. inject state trie data into a geth leveldb
  3. inject state trie data into IPFS (in support of openethereum/parity-ethereum#4172)
  4. inject state trie data into our postgres DB (if we do 3 I think we can skip this.)

Add description of function to README

I was recommended this project as a tool for indexing blockchain data, perhaps similar to thegraph.com. It looks impressive, but from the Readme it's not quite clear what vulcanizedb actually does. E.g. does it put the entire contents of the eth blockchain into postgres? What is the interface for accessing the data?

Just a little "What it does" section would be very helpful!

EDIT: Found this comment on gitter with nice initial description:
https://gitter.im/vulcanizeio/VulcanizeDB?at=5b2a54b0148056028591b323
Opened PR with that text as about section: #61

More detail would still be appreciated.

Dockerfile fails to build

Found a couple of issues with the Dockerfile:

  1. Not a valid config location (could be resolved using volumes) https://github.com/vulcanize/vulcanizedb/blob/staging/Dockerfile#L18

  2. continuousLogSync appears to no longer be a valid command https://github.com/vulcanize/vulcanizedb/blob/staging/dockerfiles/startup_script.sh#L25

  3. sync appears to no longer be a valid command https://github.com/vulcanize/vulcanizedb/blob/staging/dockerfiles/rinkeby/docker-compose.yml#L10

Suggestions:

  • Update startup_script.sh to use postgres settings from the config.toml instead of passing them in as separate environment settings
  • Allow commands to be passed to the container using a CLI after the container has started

contract_watcher_header_sync_transformer_test.go fails unexpectedly

The integration_tests will randomly fail on Travis, with the below error:

• Failure [0.398 seconds]
contractWatcher headerSync transformer
/home/travis/gopath/src/github.com/vulcanize/vulcanizedb/integration_test/contract_watcher_header_sync_transformer_test.go:21
  Init
  /home/travis/gopath/src/github.com/vulcanize/vulcanizedb/integration_test/contract_watcher_header_sync_transformer_test.go:39
    Initializes transformer's contract objects [It]
    /home/travis/gopath/src/github.com/vulcanize/vulcanizedb/integration_test/contract_watcher_header_sync_transformer_test.go:40
    Expected
        <int64>: 5197514
    to equal
        <int64>: 6194632
    /home/travis/gopath/src/github.com/vulcanize/vulcanizedb/integration_test/contract_watcher_header_sync_transformer_test.go:59

Summarizing 1 Failure:
[Fail] contractWatcher headerSync transformer Init [It] Initializes transformer's contract objects 
/home/travis/gopath/src/github.com/vulcanize/vulcanizedb/integration_test/contract_watcher_header_sync_transformer_test.go:59

This appears to be due to a Postgres connection timeout or similar issue, although how that results in the above error is not clear. Need to investigate further, putting this here as a reminder!

API should be the third part of vdb

The documentation is a little unclear about how to serve the data. The testing section should come after the API section, and it should be made clearer that most users will want the GraphQL endpoints up and running.

Setup Fake Node for Tests

Testing against Infura leads to some flakiness; it would be ideal to have a fake node that returns static data for our integration tests. A sketch of one approach is below.
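
A possible shape for such a fake node: a local HTTP test server that answers JSON-RPC requests with canned responses keyed by method name (the method and value here are illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// fakeNode answers every JSON-RPC request with a canned result keyed by
// method name, so integration tests don't depend on Infura.
func fakeNode(responses map[string]interface{}) *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req struct {
			ID     json.RawMessage `json:"id"`
			Method string          `json:"method"`
		}
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		json.NewEncoder(w).Encode(map[string]interface{}{
			"jsonrpc": "2.0",
			"id":      req.ID,
			"result":  responses[req.Method],
		})
	}))
}

func main() {
	node := fakeNode(map[string]interface{}{"eth_blockNumber": "0x5e7e14"})
	defer node.Close()

	resp, _ := http.Post(node.URL, "application/json",
		strings.NewReader(`{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}`))
	var out map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out["result"]) // 0x5e7e14
}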

What table must have rows to avoid "err: sql: no rows in result set"?

Hi Vulcanizedb team,
We have been trying out your repo for the past few days. I notice that when I run ./vulcanizedb contractWatcher --config=./environments/metalith.toml --mode=header or ./vulcanizedb contractWatcher --config=./environments/metalith.toml --mode=full, vulcanizedb.log at times shows "err: sql: no rows in result set".

May I know what tables are required for contractWatcher both header and full mode to run normally?

Still struggling to get some events populated in separate tables as shown in the contract-watcher documentation.

Update vulcanize to properly handle dep's issue with case insensitivity (for github repos)

Based on golang/dep#433 it seems that dep is not able to handle case insensitivity, specifically with GitHub. This is causing some weird behavior in VDB, like adding VulcanizeDB into its own vendor directory.

Steps to reproduce:

  • clone the project
  • cd into the VulcanizeDB directory that was created
  • run dep ensure
  • notice that "github.com/vulcanize/vulcanizedb" is added to Gopkg.lock as a dependency
  • if you make a change to your local VulcanizeDB project (e.g. add an extra print statement to the sync command) and then rebuild, the change you've made will not be reflected. I think this is because the binary that is created is using the "vendored vulcanizedb" instead

A quick fix is to locally change the VulcanizeDB directory to vulcanizedb (no capitals), but we should instead figure out a better way to handle this - perhaps changing the repo name would help, but maybe there is another solution.

Dai Contract Handler

As an exchange,
I want to be able to run a VulcanizeDB handler that watches Dai,
So that I can track the state of that contract.

Notes
The current handler is written for Sai; this would be for Dai 1.0. An additional handler would need to be written for a new Dai contract.

Various data integrity checks

Maybe add some data verification for the final database? I was thinking we could try a simple verification of the parent hash on each block vs. the previous block in the db, like the first query below. I was also manually checking total difficulty against Etherscan, like the second query, but maybe that could be automated.

-- parent hash on block should match hash of previous block
SELECT
  block_number,
  block_hash,
  block_parenthash,
  parent
FROM (
       SELECT
         block_hash,
         lag(block_hash)
         OVER (
           ORDER BY block_number ) AS parent,
         block_parenthash,
         block_number
       FROM blocks
     ) a
WHERE parent != block_parenthash
ORDER BY block_number;
-- block cumulative difficulty (can use as QC)
SELECT *
FROM (
       SELECT
         block_number,
         to_char(sum(block_difficulty)
                 OVER (
                   ORDER BY block_number ), '999,999,999,999,999,999,999,999')
       FROM blocks
     ) a
WHERE block_number IN (
  4750712,
  4756962,
  4763212,
  4765177,
  4766503
);

Persist cup state on LogNewCup event

As a vulcanizedb user,
I want to be able to automatically persist a query of the contract's state after a new cup event,
So that I can automate part of my cup data persistence.

Notes

  • Assumes we already know about the contract/event being watched
  • Assumes we can just persist the data we get from a contract method call

Synchronize cold imported and rpc synced blocks

As an application developer running VulcanizeDB,
I want the option to use cold imported blocks instead of blocks synced over the rpc,
So that I can realize a performance gain on syncing and augment that data with rpc syncing.

Notes
Currently, the db keeps track of the eth node ID that added the block. Blocks added by one eth node ID are treated as non-existent during a sync from another ethereum node. Therefore, cold imported blocks won't shorten the amount of syncing required if you want to follow up with a sync over the RPC (since the cold import and rpc sync generate blocks with different node ids).
Probably best to make this an optional flag so that you can still generate different data for different nodes when seeking to validate consensus across nodes.

Build plugin from Makefile(?)

Not sure if this is possible, but - related to #127 - it'd be nice if we could separate out some of the parts of building the plugin (e.g. building a .so file) into a make command separate from executing the code.

Repair geth databases

Write a tool based on the work done in #121 to inject missing state trie information into a geth leveldb.

Create API to View Blocks

The intent of this is to demonstrate that we can query the blocks using SQL.

Possible options:

  • Allow users to query for blocks via GraphQL
  • Present a simple JSON API that executes some hardcoded queries

`yarn` command failed

Yarn:

yarn --version
1.15.2

Node:

node --version
v10.15.3

OS:

lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.2 LTS
Release:	18.04
Codename:	bionic
vulcanize/vulcanizedb/postgraphile (staging) $ yarn
yarn install v1.15.2
[1/4] Resolving packages...
[2/4] Fetching packages...
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning " > [email protected]" has unmet peer dependency "graphql@^0.10.5 || ^0.11.3 || ^0.12.0 || ^0.13.0".
warning " > [email protected]" has unmet peer dependency "graphql@^0.10.0 || ^0.11.0 || ^0.12.0 || ^0.13.1".
warning " > [email protected]" has incorrect peer dependency "typescript@^2.7".
[4/4] Building fresh packages...
error /home/ubuntu/go/src/github.com/vulcanize/vulcanizedb/postgraphile/node_modules/libpq: Command failed.
Exit code: 1
Command: node-gyp rebuild
Arguments: 
Directory: /home/ubuntu/go/src/github.com/vulcanize/vulcanizedb/postgraphile/node_modules/libpq
Output:
gyp info it worked if it ends with ok
gyp info using [email protected]
gyp info using [email protected] | linux | x64
gyp http GET https://nodejs.org/download/release/v10.15.3/node-v10.15.3-headers.tar.gz
gyp http 200 https://nodejs.org/download/release/v10.15.3/node-v10.15.3-headers.tar.gz
gyp http GET https://nodejs.org/download/release/v10.15.3/SHASUMS256.txt
gyp http 200 https://nodejs.org/download/release/v10.15.3/SHASUMS256.txt
gyp info spawn /usr/bin/python2
gyp info spawn args [ '/usr/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args   'binding.gyp',
gyp info spawn args   '-f',
gyp info spawn args   'make',
gyp info spawn args   '-I',
gyp info spawn args   '/home/ubuntu/go/src/github.com/vulcanize/vulcanizedb/postgraphile/node_modules/libpq/build/config.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/usr/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/home/ubuntu/.node-gyp/10.15.3/include/node/common.gypi',
gyp info spawn args   '-Dlibrary=shared_library',
gyp info spawn args   '-Dvisibility=default',
gyp info spawn args   '-Dnode_root_dir=/home/ubuntu/.node-gyp/10.15.3',
gyp info spawn args   '-Dnode_gyp_dir=/usr/lib/node_modules/npm/node_modules/node-gyp',
gyp info spawn args   '-Dnode_lib_file=/home/ubuntu/.node-gyp/10.15.3/<(target_arch)/node.lib',
gyp info spawn args   '-Dmodule_root_dir=/home/ubuntu/go/src/github.com/vulcanize/vulcanizedb/postgraphile/node_modules/libpq',
gyp info spawn args   '-Dnode_engine=v8',
gyp info spawn args   '--depth=.',
gyp info spawn args   '--no-parallel',
gyp info spawn args   '--generator-output',
gyp info spawn args   'build',
gyp info spawn args   '-Goutput_dir=.' ]
find: ‘/usr/pg*’: No such file or directory
gyp: Call to 'which pg_config || find /usr/bin /usr/local/bin /usr/pg* /opt -executable -name pg_config -print -quit' returned exit status 1 while in binding.gyp. while trying to load binding.gyp
gyp ERR! configure error 
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack     at ChildProcess.onCpExit (/usr/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:345:16)
gyp ERR! stack     at ChildProcess.emit (events.js:189:13)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12)
gyp ERR! System Linux 4.15.0-1037-aws
gyp ERR! command "/usr/bin/node" "/usr/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/ubuntu/go/src/github.com/vulcanize/vulcanizedb/postgraphile/node_modules/libpq
gyp ERR! node -v v10.15.3
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok

Link IPLD Data Points To Transactions' IPLDs

As an application developer generating reports and persisting them to IPFS,
I want to be persisting IPLDs with links to the relevant transactions,
So that I can track the data's origins on a decentralized network.

Notes
With respect to existing handlers, this would mean: when we're persisting the state of a contract as a result of a method being invoked on it, we also want to persist the transaction ID for that invocation. Then, when we're building up a report that aggregates data about those method invocations, we would want the linked transactions to be persisted on IPFS and to have links to those transactions embedded on each individual data point.

Store snapshot of contract's history on IPFS

As an application developer using vulcanize to track the history of my contract,
I want to be able to upload a snapshot of that contract's full history to IPFS,
So that I can share that info with others, in particular the end-users of my application.

Notes

  • Focus is on persisting data sourced from VulcanizeDB on a distributed network
  • Brackets questions of how to trustlessly validate that history and setup gossiping

Improve the efficiency of receipts retrieval for a block

Right now we're retrieving the receipts using the ethereum client's TransactionReceipt method. This requires retrieving the receipts one at a time over the RPC. This is obviously slow and inefficient. Ideally we'd retrieve all receipts in bulk for a given block. Two avenues worth exploring are:

  1. GetBlockReceipts in ethereum/go-ethereum/core/database_util.go. This appears to be what we'd need. The downside of this method is that we'd no longer be able to use the RPC connection and instead would need a connection to the leveldb. This would probably mean using Infura or similar services served over RPC would no longer be possible.

  2. ethereum/go-ethereum/eth/api_backend.go also wraps the database_util.go GetBlockReceipts method. This would have a similar set of cons as above. The advantage here is that the eth node could be started programmatically. (A batched-RPC alternative is sketched below.)
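
One further option that keeps an RPC-only setup (so Infura and similar services would still work) is to batch the eth_getTransactionReceipt calls for a block into a single request with go-ethereum's rpc client. A sketch, not taken from the existing codebase:

package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rpc"
)

// fetchReceipts retrieves receipts for all of a block's transaction hashes in
// a single batched JSON-RPC request instead of one round trip per transaction.
func fetchReceipts(client *rpc.Client, txHashes []string) ([]*types.Receipt, error) {
	receipts := make([]*types.Receipt, len(txHashes))
	batch := make([]rpc.BatchElem, len(txHashes))
	for i, hash := range txHashes {
		receipts[i] = new(types.Receipt)
		batch[i] = rpc.BatchElem{
			Method: "eth_getTransactionReceipt",
			Args:   []interface{}{hash},
			Result: receipts[i],
		}
	}
	if err := client.BatchCallContext(context.Background(), batch); err != nil {
		return nil, err
	}
	for _, elem := range batch {
		if elem.Error != nil {
			return nil, elem.Error
		}
	}
	return receipts, nil
}

func main() {
	client, err := rpc.Dial("http://localhost:8545") // illustrative endpoint
	if err != nil {
		panic(err)
	}
	receipts, err := fetchReceipts(client, []string{
		"0x80c8375c679404f73833f36be0d2903c4931764827da2a323d8ba83749c20715",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("receipts fetched:", len(receipts))
}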

Process for filling gaps in storage diffs

If we're fetching diffs off of a pubsub interface then we either need to (1) digest all diffs or (2) digest diffs coming from a specified set of watched contracts.

Digesting all diffs maximizes flexibility but increases resource usage to cache potentially unnecessary data. Digesting diffs from only known watched contracts minimizes resource usage but means that we might miss information emitted before the subscription was setup.

It would be good to have a script for extracting storage diffs that were emitted before a subscription was initiated (e.g. between the deployment block and the first block where a subscription received data).

This could naively be implemented by generating all known storage keys from event data and then calling getStorageAt for those keys at all relevant blocks.
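
A naive backfill sketch along those lines, using go-ethereum's ethclient (the endpoint, contract address, keys, and block range are placeholders):

package main

import (
	"context"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// backfillStorage queries every known storage key at every block in the gap
// between contract deployment and the first block the subscription covered.
func backfillStorage(client *ethclient.Client, contract common.Address, keys []common.Hash, fromBlock, toBlock int64) error {
	ctx := context.Background()
	for blockNum := fromBlock; blockNum <= toBlock; blockNum++ {
		for _, key := range keys {
			value, err := client.StorageAt(ctx, contract, key, big.NewInt(blockNum))
			if err != nil {
				return err
			}
			// This is where the value would be persisted as a storage diff.
			fmt.Printf("block %d key %s value %x\n", blockNum, key.Hex(), value)
		}
	}
	return nil
}

func main() {
	client, err := ethclient.Dial("http://localhost:8545") // illustrative endpoint
	if err != nil {
		panic(err)
	}
	keys := []common.Hash{common.HexToHash("0x0")} // storage slots derived from event data
	contract := common.HexToAddress("0x448a5065aebb8e423f0896e6c5d525c040f59af3")
	if err := backfillStorage(client, contract, keys, 4752000, 4752010); err != nil {
		panic(err)
	}
}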

runtime error: invalid memory address or nil pointer dereference

Ran into this error when attempting to run godo vulcanizeDb -- --environment=ethlive against an empty PostgreSQL database and a fully fast-synced geth instance:

$ godo vulcanizeDb -- --environment=ethlive  
vulcanizeDb 3ms
2018/01/02 15:38:46 SubscribeToBlocks
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x7ebbcb]

goroutine 1 [running]:
github.com/vulcanize/vulcanizedb/vendor/github.com/ethereum/go-ethereum/core/types.(*Block).Transactions(...)
        /home/maker/go/src/github.com/vulcanize/vulcanizedb/vendor/github.com/ethereum/go-ethereum/core/types/block.go:293
github.com/vulcanize/vulcanizedb/pkg/geth.GethBlockToCoreBlock(0x0, 0xe39340, 0xc42000e248, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /home/maker/go/src/github.com/vulcanize/vulcanizedb/pkg/geth/geth_block_to_core_block.go:19 +0x8b
github.com/vulcanize/vulcanizedb/pkg/geth.(*GethBlockchain).GetBlockByNumber(0xc42006de40, 0xffffffffffffffe6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /home/maker/go/src/github.com/vulcanize/vulcanizedb/pkg/geth/geth_blockchain.go:50 +0xe5
github.com/vulcanize/vulcanizedb/pkg/history.updateBlockRange(0xe42f20, 0xc42006de40, 0xe435a0, 0xc420188900, 0xc4201d0000, 0x18, 0x18, 0xc420103dc0)
        /home/maker/go/src/github.com/vulcanize/vulcanizedb/pkg/history/populate_blocks.go:35 +0x8d
github.com/vulcanize/vulcanizedb/pkg/history.UpdateBlocksWindow(0xe42f20, 0xc42006de40, 0xe435a0, 0xc420188900, 0x18, 0xc420186000, 0xc420186100, 0xc420000208)
        /home/maker/go/src/github.com/vulcanize/vulcanizedb/pkg/history/populate_blocks.go:29 +0xd5
main.validateBlocks(0xc42006de40, 0xc42015b8f0, 0xc42001cd70, 0x42, 0x3ff0000000000000, 0x1, 0x18, 0xc420196040)
        /home/maker/go/src/github.com/vulcanize/vulcanizedb/cmd/vulcanize_db/main.go:42 +0xa0
main.main()
        /home/maker/go/src/github.com/vulcanize/vulcanizedb/cmd/vulcanize_db/main.go:67 +0x49f
exit status 2

Here's the ethlive.toml:

[database]
name = "vulcanize_ethlive"
host = "localhost"
port = 5432

[client]
ipcPath = "/home/maker/.ethereum/geth.ipc"

(godo run -- --environment=ethlive is working without any issue)

NixOS 17.09
Geth Version: 1.7.3-stable
Postgresql 10.0
Go 1.9.2 linux/amd64
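
The stack trace shows GethBlockToCoreBlock being called with a nil block (the first argument is 0x0), presumably because the block lookup returned nothing for the requested number. A minimal defensive sketch of the fix direction, assuming go-ethereum's ethclient rather than vulcanizedb's exact internals:

package main

import (
	"context"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// safeBlockByNumber turns a missing block into an error instead of letting a
// nil block reach the conversion step and panic.
func safeBlockByNumber(client *ethclient.Client, number int64) (*types.Block, error) {
	block, err := client.BlockByNumber(context.Background(), big.NewInt(number))
	if err != nil {
		return nil, err
	}
	if block == nil {
		return nil, fmt.Errorf("block %d not found on node", number)
	}
	return block, nil
}

func main() {
	client, err := ethclient.Dial("/home/maker/.ethereum/geth.ipc") // ipcPath from ethlive.toml
	if err != nil {
		panic(err)
	}
	block, err := safeBlockByNumber(client, 4752015) // placeholder block number
	if err != nil {
		fmt.Println("skipping conversion:", err)
		return
	}
	fmt.Println("transactions:", len(block.Transactions()))
}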

Persist content addressable transaction history

As an application developer using Vulcanize,
I want to be able to query for IPLD data representing a contract's history,
So that I have content addressable data to validate the result sets generated by my queries.

Notes

  • Requires converting Ethereum data into IPLD and persisting it to Postgres
  • Requires constructing a relational representation of that data for optimized queries

Comments

  • Perhaps an alternative "So that" rationale - are we actually preferring IPLD data because it allows us to extend what types of data we persist, rather than to validate queries?

Every Block Handler

As a VulcanizeDB application developer,
I want to be able to run a handler that tracks every block,
So that I can get point in time information about continuous data like valuation.

Notes
The distinction here is that current handlers are triggered by events, which may or may not occur every block. This handler would always track something every block, regardless of events.

Determine How to Obtain and Decode Internal Transactions

These transactions are not published on the blockchain. They are instead side effects of applying a transaction's data field to the current blockchain state. We need to determine how to recover these internal transactions for a given contract.

Acceptance Criteria

In the geth console, be able to replicate the output of a blockchain explorer's internal transactions for a specific contract (e.g. https://etherscan.io/address/0xd26114cd6ee289accf82350c8d8487fedb8a0c07#internaltx). A tracing sketch is included after the links below.

Additional Information

https://ethereum.stackexchange.com/questions/3417/how-to-get-contract-internal-transactions
https://ethereum.stackexchange.com/questions/8315/confused-by-internal-transactions/8318
https://ethereum.stackexchange.com/questions/3417/how-to-get-contract-internal-transactions/3427
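
One known route is geth's debug_traceTransaction with the built-in callTracer, which returns the internal call tree for a single transaction. A sketch over the raw RPC client; the node must expose the debug API, and the endpoint and transaction hash are illustrative:

package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// The node must be started with the debug API enabled (e.g. --rpcapi eth,debug).
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		panic(err)
	}

	var trace json.RawMessage
	txHash := "0x80c8375c679404f73833f36be0d2903c4931764827da2a323d8ba83749c20715"
	err = client.CallContext(context.Background(), &trace, "debug_traceTransaction", txHash,
		map[string]interface{}{"tracer": "callTracer"})
	if err != nil {
		panic(err)
	}
	// The result is a nested call tree; its CALL/CREATE frames are the
	// "internal transactions" shown by block explorers.
	fmt.Println(string(trace))
}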

Support these calls via GraphQL

In large part this is a documentation issue.
It may be that something in front of GraphQL will be needed here.
Below are examples provided by MakerDAO.

Call examples:

http://dai-service.makerdao.com/cups

It returns the actual list of CDPs with their current variables, plus the last block in which each CDP was updated, whether it is closed, and whether it is safe.
For this last field we need the current values of ‘chi’, ‘par’, ‘mat’, ‘per’ and ‘pip’. Some of them can be saved via filters; others need constant updates at intervals because they change dynamically every second.

{
  "lastBlockNumber": 4806437,
  "results": [
    {
      "_id": "5a428e8ff55dcf743da8b710",
      "cupi": 1,
      "lad": "0xcd5f8fa45e0ca0937f86006b9ee8fe1eedee5fc4",
      "ink": 0,
      "art": 0,
      "ire": 0,
      "closed": false,
      "lastBlockNumber": 4754490,
      "safe": true
    },
    {
      "_id": "5a428e8ff55dcf743da8b715",
      "cupi": 2,
      "lad": "0x000dcf36d188714ec52fe527d437c486d4fb24d8",
      "ink": 1000000000000000000,
      "art": 0,
      "ire": 0,
      "closed": false,
      "lastBlockNumber": 4754510,
      "safe": true
    },
    {
      "_id": "5a428e8ff55dcf743da8b718",
      "cupi": 3,
      "lad": "0x8d44eaae757884f4f8fb4664d07acecee71cfd89",
      "ink": 1.2728055128824698e+22,
      "art": 3.926421109348261e+24,
      "ire": 3.9262374014950224e+24,
      "closed": false,
      "lastBlockNumber": 4804048,
      "safe": true
    }
  ]
}

http://dai-service.makerdao.com/cupHistoryActions

{
  "results": [
    {
      "_id": "5a428e8ff55dcf743da8b712",
      "action": "open",
      "cupi": 1,
      "sender": "0xcd5f8fa45e0ca0937f86006b9ee8fe1eedee5fc4",
      "param": null,
      "blockNumber": 4754490,
      "timestamp": 1513602021,
      "transactionHash": "0x53c89dade0a03228ad7312d7f682018b58ad4410df2414410ff3b66993344c54"
    },
    {
      "_id": "5a428e8ff55dcf743da8b716",
      "action": "open",
      "cupi": 2,
      "sender": "0x000dcf36d188714ec52fe527d437c486d4fb24d8",
      "param": null,
      "blockNumber": 4754500,
      "timestamp": 1513602211,
      "transactionHash": "0x2a10d7adbb584c2dd6beb5d12d06f8afa0062a0757a9be89e084d7e6dc0662ae"
    },
    {
      "_id": "5a428e8ff55dcf743da8b71a",
      "action": "open",
      "cupi": 3,
      "sender": "0x8d44eaae757884f4f8fb4664d07acecee71cfd89",
      "param": null,
      "blockNumber": 4754500,
      "timestamp": 1513602211,
      "transactionHash": "0x18bd197d83c62c6de70512dc18842748b27ffff98d9e0b0c6ab0278b127c096f"
    }
  ]
}

How to setup?

I am trying to understand how VulcanizeDB works.
From what I understand, it copies data from the blockchain to a PostgreSQL database. What is vulcanizedb sync doing: is it constantly feeding new transactions to Postgres, or is it taking a snapshot? Is it constantly running, or does it need to be scheduled?
How should I deal with new transactions so I can be sure they are final: should I just ignore the 12 latest blocks, or what's the recommended strategy?
Do I query PostgreSQL directly? Do I need to (or can I) make schema modifications for specific indices, or should I just use the default schema?
Can I use Amazon Aurora instead of Postgres?
Is it possible/necessary to run two Ethereum clients (geth and parity) for better certainty about transactions, or is this setup not intended?

Is VulcanizeDB production ready for use on exchanges - what are the limitations?
