The way contracts from the Interpreter Subgraph are used in the app should change. The main goal is to add these contracts to Supabase with their data decoded from the on-chain Meta Contract, while still using the same structure the studio already has. The following proposal is intended to achieve this:
Tables
- contracts
The first proposal covers changes across new Supabase tables where we can store the info and manage all the contracts that share the same bytecode, spread over different tables.
The first table that will be created/extended is the contracts table, which will hold most of the decoded data, like the ABI. Since the ID should be unique, we can generate a UUID from the bytecode hash obtained via the Introspection Contract (a sketch of this derivation follows the table below).
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from the bytecode hash |
| bytecode_hash (text)* | The bytecode hash |
| abi (json)* | ABI from the Rain document |
| contract_meta (json) | The contract meta from the Rain document |
| project (uuid) | UUID relating the contract to a project (can be null) |
| created_at (timestamp)* | Timestamp when the row was added |
| slug (text)* | Contract slug (or contract alias) |
| metadata (json)* | Follows the current format, minus some fields like known addresses |
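As a minimal sketch of the deterministic ID scheme, a UUIDv5 can be derived from the bytecode hash with the `uuid` package. The namespace constant here is a hypothetical placeholder; any fixed UUID works, but it must be chosen once and never changed:

```ts
import { v5 as uuidv5 } from "uuid";

// Hypothetical project-wide namespace (assumption, not an existing value);
// it must stay constant once rows have been created with it.
const BYTECODE_NAMESPACE = "2f9a2c4e-58b1-4b9e-9c53-0d4ae529f7aa";

/** Deterministically derive the contracts.ID from a bytecode hash. */
export function contractId(bytecodeHash: string): string {
  // Same bytecode hash -> same UUID, so re-running the sync job is idempotent.
  return uuidv5(bytecodeHash.toLowerCase(), BYTECODE_NAMESPACE);
}
```

The same helper could back the IDs of the deployers, rainterpreters, and rainterpreter_stores tables below, since they all key on a bytecode hash.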
- contract_addresses
The second table solves the issue that follows: how to keep track of all the addresses that are the "same" contract (same bytecode) across different chains. The contract_addresses table will have an ID, the address, and the chainId. This new table will also have a relation to the contracts table, which makes it possible to find all the addresses belonging to a specific ID in contracts. The table will have a constraint making the combination of address + chainId unique, so inserts can be written as idempotent upserts (sketched after the table below).
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from address + chainId |
| contract (uuid)* | UUID relating to a row in contracts |
| address (text)* | The address of the contract |
| chainId (int8)* | The chain ID where the contract is deployed |
| created_at (timestamp)* | Timestamp when the row was added |
| type (text)* | Type of the contract, like implementation or contract (or some other way to categorize them) |
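Because address + chainId is unique, the sync job can write addresses idempotently. A hedged sketch with supabase-js, using the table and column names proposed above (the env var names and the placeholder values are assumptions):

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // service_role key, see Execution below
);

// Insert the address, or silently keep the existing row if this
// address + chainId combination was already stored. The ID column is
// assumed to be filled by a DB default or derived like contractId above.
await supabase.from("contract_addresses").upsert(
  {
    contract: contractId(bytecodeHash), // FK into contracts (helper from the earlier sketch)
    address: "0x...",
    chainId: 137,
    type: "implementation",
  },
  { onConflict: "address,chainId", ignoreDuplicates: true },
);
```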
- deployers
This table changes how all the interpreter-related info is stored, so the workflow can focus on saving as much information as possible from the Subgraph into the database. The deployers table will save all the data related to the ExpressionDeployer itself, like its bytecode_hash, its opmeta, and metadata, organized and ready to be used, etc.
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from the bytecode hash |
| bytecode_hash (text)* | The bytecode hash |
| opmeta (json)* | The decoded opmeta from the Rain document |
| created_at (timestamp)* | Timestamp when the row was added |
- deployers_addresses
This table allows the database to store every ExpressionDeployer address, even those sharing the same bytecode hash, which helps organize them and saves space in the database by avoiding duplicated data. It then becomes possible to query all the addresses for a given chain, find similar deployers for a given address, or cover other scenarios (a query sketch follows the table below) :)
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from address + chainId |
| deployer (uuid)* | UUID relating to a row in deployers |
| address (text)* | The address of the deployer |
| chainId (int8)* | The chain ID where the deployer is deployed |
| created_at (timestamp)* | Timestamp when the row was added |
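A sketch of the "similar deployers for a given address" lookup, reusing the supabase client from the earlier snippet (someAddress and someChainId are placeholders):

```ts
// 1. Which deployers row does this address belong to?
const { data: entry } = await supabase
  .from("deployers_addresses")
  .select("deployer")
  .eq("address", someAddress)
  .eq("chainId", someChainId)
  .single();

// 2. All addresses, on any chain, that point at the same deployers row,
//    i.e. all ExpressionDeployers sharing that bytecode hash.
const { data: siblings } = await supabase
  .from("deployers_addresses")
  .select("address, chainId")
  .eq("deployer", entry!.deployer);
```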
- rainterpreters
As with the previous tables, this table will store all the related info about a given Rainterpreter contract based on its bytecode.
Basically, this info should come from the Interpreter entity in the Subgraph, which holds all the data keyed by the bytecode hash.
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from the bytecode hash |
| bytecode_hash (text)* | The bytecode hash of the Rainterpreter contract |
| created_at (timestamp)* | Timestamp when the row was added |
- rainterpreter_addresses
This table will store the particular information about each Rainterpreter contract instance. As with the rainterpreters table, the data will basically come from the Interpreter entity in the Subgraph.
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from address + chainId |
| rainterpreter (uuid)* | UUID relating to a row in rainterpreters |
| address (text)* | The address of the Rainterpreter |
| chainId (int8)* | The chain ID where the Rainterpreter is deployed |
| created_at (timestamp)* | Timestamp when the row was added |
- rainterpreter_stores
Currently, the RainterpreterStore contracts on chain are pretty simple, but new ones will naturally be deployed by our updates, or by some third party writing or updating their own contract. These tables propose a way to start working around that. Following the same pattern as the other tables, the two new tables will be rainterpreter_stores and rainterpreter_store_addresses, so we get the same benefits: when someone searches for a specific address, they can see other similar contracts with the same bytecode, like Etherscan does (a lookup sketch follows the second table below).
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from the bytecode hash |
| bytecode_hash (text)* | The bytecode hash of the RainterpreterStore contract |
| created_at (timestamp)* | Timestamp when the row was added |
- rainterpreter_store_addresses
| Column | Description |
| --- | --- |
| ID (uuid)* | UUID generated from address + chainId |
| rainterpreter_store (uuid)* | UUID relating to a row in rainterpreter_stores |
| address (text)* | The address of the RainterpreterStore |
| chainId (int8)* | The chain ID where the RainterpreterStore is deployed |
| created_at (timestamp)* | Timestamp when the row was added |
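The Etherscan-style "same bytecode" view then becomes a hash-first lookup: resolve the rainterpreter_stores row by its bytecode hash and embed all known addresses in one query (someBytecodeHash is a placeholder; column names as proposed above):

```ts
// One query: the store row plus every address that shares its bytecode,
// using PostgREST's resource embedding from parent to child table.
const { data: store } = await supabase
  .from("rainterpreter_stores")
  .select("ID, bytecode_hash, rainterpreter_store_addresses(address, chainId)")
  .eq("bytecode_hash", someBytecodeHash)
  .single();
```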
Metadata scheme
As mentioned, the goal is to use the current workflow the studio has and avoid refactoring unnecessary things. The metadata is a great way to save all the relevant information about a contract that is necessary for its use. Almost all the fields will remain in the metadata, with the exception of a few, like the addresses, since those are handled in a different way.
Metadata format that will go into the contracts -> metadata column:
```json
{
  "name": "",
  "source": "",
  "description": "",
  "inputs": ["input_1", "input_2"],
  "expressions": ["expression_1", "expression_2"]
}
```
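For reference, the same shape as a TypeScript type (field names come straight from the JSON above; whether inputs and expressions stay plain strings is an open assumption):

```ts
/** Shape of the contracts.metadata JSON column. */
interface ContractMetadata {
  name: string;
  source: string;
  description: string;
  inputs: string[]; // e.g. ["input_1", "input_2"]
  expressions: string[]; // e.g. ["expression_1", "expression_2"]
}
```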
Execution
- cron job
There must be a specific job executed from time to time to update the data in Supabase. The contract meta obtained from the Subgraph should be decoded as CBOR, based on its encoding and data type, so it can be fully handled and saved into the Supabase tables with the specific and expected design. There are different ways to execute this, but the best way (at the time this is written) is to use GitHub workflows with a schedule
that runs every X minutes.
For that reason the job needs write access to the Supabase project to store all this data. This can be achieved using the service_role
API key that Supabase provides. EDIT: alternatively, we can use the pg_cron extension that Supabase provides to schedule the edge function.