rainlanguage / rain-studio

Rain Studio is the ultimate platform for creating smart contracts and blockchain-based applications using Rainlang.

Home Page: https://studio.rainprotocol.xyz
Add a flag to the draft_expressions table so that users can make their draft expressions publicly accessible.
Adjust the RLS policy so that SELECT returns all draft_expressions that have the flag set.
CREATE POLICY select_public_draft_expressions
ON draft_expressions
FOR SELECT
USING (public = TRUE);
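The flag itself could be added with a migration along these lines — a sketch only; the column name public matches the policy, while the NOT NULL default is an assumption:

```sql
-- Sketch: add the visibility flag to draft_expressions.
-- Column name "public" matches the RLS policy above; defaulting
-- to private (FALSE) is an assumption, not a decided behaviour.
ALTER TABLE draft_expressions
ADD COLUMN public BOOLEAN NOT NULL DEFAULT FALSE;
```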
There are missing scenarios that we still need to cover related to the visibility of the draft expressions and how the user can interact with them. The main scenarios that have been considered so far with Josh:
Scenario 1: I have a private expression and I share it with you via a sharable link. You star it. Now, I replace the sharable link (as you described above). So, I don’t want it to appear in your starred expressions anymore either. Scenario 1 is maybe only possible if you starred the sharable slug, not the actual draft_expression id. Because even if I replace the sharable slug, it won’t change the id.
Scenario 2: I made an expression public and you discovered it in some public feed/community showcase/elsewhere. You starred it. Now, I want to make it private again, so I don’t want it to appear in your starred expressions. Scenario 2 seems easy on its own. You just never return private expressions when a user queries their starred expressions. But it’s more complicated when you consider the above.
Scenario 3: you share a private expression with me via a sharable link, and I star it. Then you make it public. THEN, you replace the sharable link. So even though I starred it via the sharable link and that link doesn't exist anymore, I should still have it starred, because now it's public anyway.
Probably there’s other scenarios that are a mix of all of the above.
Also need to get sourcemap uploading working
Two things I noticed:
PR on #17
Page on profile needs updating to latest design:
Confirmation modal for unlinking a wallet (check the 'are you sure you want to delete this expression' modal on user/[user]/expressions)
Show current linked wallet in profile dropdown (and "Connect wallet" in its place if no wallet is connected yet):
On first connecting a wallet (on any screen, this is global across the app), or switching addresses, show a modal if the connected wallet isn't linked to the user's account yet.
If they say 'no thanks' store this preference in local storage and don't ask again for that wallet address.
However, if the connected wallet isn't linked, always show the "link this wallet" CTA on the profile dropdown.
There is a relevant issue with the cron job: the queries are not returning all the entities that are stored. This is due to the subgraph response being limited to approximately 100 entities. Some possible solutions are paginating the queries (e.g. with the first/skip parameters, or a cursor on id) until no more results come back.
How contracts from the Interpreter Subgraph are used in the app should change. The main goal is to add these contracts to Supabase with their data decoded from the on-chain Meta Contract, while still using the same structure the studio already has. The following proposal is intended to achieve this:

contracts

The first proposal introduces new Supabase tables where we can store the info and manage all the contracts with the same bytecode, spread across different tables.

The first table to be created/extended is the contracts table, which will hold most of the decoded data, like the ABI. Since the ID should be unique, we can generate a UUID from the bytecode hash obtained using the Introspection Contract.
ID (uuid)* | bytecode_hash (text)* | abi (json)* | contract_meta (json) | project (uuid) | created_at (timestamp)* | slug (text)* | metadata (json)* |
---|---|---|---|---|---|---|---|
UUID from bytecode hash | The bytecode hash | ABI from the Rain document | The contract meta from the Rain document | UUID that relates to a project (can be null) | Timestamp when it was added to the table | Contract slug (or contract alias) | Follows the current format, minus some fields like known addresses |
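Deriving that deterministic UUID could be done in Postgres itself, for example with the uuid-ossp extension that Supabase ships — the namespace choice and the example hash below are assumptions:

```sql
-- Sketch: deterministic UUID from a bytecode hash, so the same bytecode
-- always maps to the same contracts.id. The namespace is an arbitrary
-- (but fixed) choice; the hash value is a placeholder.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

SELECT uuid_generate_v5(
  uuid_ns_url(),   -- any fixed namespace works, as long as it never changes
  '0x9f1d...'      -- the bytecode hash (placeholder value)
);
```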
contract_addresses
The second table solves the problem of keeping track of all the addresses that are the "same" contract (same bytecode) across different chains. The contract_addresses table will have an ID, the address, and the chainId. It will also have a relation to the contracts table, which helps find all the addresses behind a specific ID on the contracts table. The table will have a constraint that the combination of address + chainId must be unique.
ID (uuid)* | contract (uuid)* | address (text)* | chainId (int8)* | created_at (timestamp)* | type (text)* |
---|---|---|---|---|---|
UUID generated from address + chainId | UUID that relates to a contracts row | The address of the contract | The chain id where the contract is deployed | Timestamp when it was added | Type of the contract, like implementation or contract (or another way to categorize them) |
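As a sketch, the two tables above might look like the following DDL — column types follow the tables above, while constraint and relation names are assumptions:

```sql
-- Sketch of the contracts / contract_addresses pair described above.
-- Constraint and foreign-key names are assumptions.
CREATE TABLE contracts (
  id            uuid PRIMARY KEY,          -- UUID derived from the bytecode hash
  bytecode_hash text NOT NULL,
  abi           json NOT NULL,
  contract_meta json,
  project       uuid,                      -- nullable relation to a project
  created_at    timestamp NOT NULL DEFAULT now(),
  slug          text NOT NULL,
  metadata      json NOT NULL
);

CREATE TABLE contract_addresses (
  id         uuid PRIMARY KEY,             -- UUID derived from address + chainId
  contract   uuid NOT NULL REFERENCES contracts (id),
  address    text NOT NULL,
  "chainId"  int8 NOT NULL,
  created_at timestamp NOT NULL DEFAULT now(),
  type       text NOT NULL,
  UNIQUE (address, "chainId")              -- same address + chain can only appear once
);
```

The other table pairs below (deployers, rainterpreters, rainterpreter_stores and their *_addresses tables) would follow the same shape.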
deployers

This table changes how all the interpreter-related info is stored, so the workflow can focus on saving as much information as possible from the Subgraph into the database. The deployers table will save all the data related to the ExpressionDeployer itself, like its bytecode_hash, the opmeta, and metadata to be organized and used, etc.
ID (uuid)* | bytecode_hash (text)* | opmeta (json)* | created_at (timestamp)* |
---|---|---|---|
UUID from bytecode hash | The bytecode hash | The decoded Opmeta from the Rain document | Timestamp when it was added |
deployers_addresses

This table lets the database store all the ExpressionDeployer addresses, even ones sharing the same bytecode hash, which helps organize them and saves space by avoiding duplicated data. It then becomes possible to query all the addresses for a given chain, find similar deployers for a given address, or other scenarios :)
ID (uuid)* | deployer (uuid)* | address (text)* | chainId (int8)* | created_at (timestamp)* |
---|---|---|---|---|
UUID generated from address + chainId | UUID that relates to a deployers row | The address of the deployer | The chain id where the deployer is deployed | Timestamp when it was added |
rainterpreters
As with the previous tables, this table will store all the related info about a given Rainterpreter contract, based on its bytecode. Basically, this info should come from the Interpreter entity in the Subgraph, which holds all the data keyed by the bytecode hash.
ID (uuid)* | bytecode_hash (text)* | created_at (timestamp)* |
---|---|---|
UUID from the bytecode hash | The bytecode hash of the Rainterpreter contract | Timestamp when it was added |
rainterpreter_addresses
This table will store the particular information about each Rainterpreter contract address. As with the rainterpreters table, the data will basically come from the Interpreter entity in the subgraph.
ID (uuid)* | rainterpreter (uuid)* | address (text)* | chainId (int8)* | created_at (timestamp)* |
---|---|---|---|---|
UUID generated from address + chainId | UUID that relates to a rainterpreters row | The address of the rainterpreter | The chain id where the rainterpreter is deployed | Timestamp when it was added |
rainterpreter_stores
Currently the RainterpreterStore contracts on chain are pretty simple, but naturally new ones could be deployed by our updates, or by some third party writing or updating their own contract. These tables propose a way to start working around that. Following the same pattern as the other tables, the two new tables will be rainterpreter_stores and rainterpreter_store_addresses, so that when someone searches for a specific address they can see other similar contracts with the same bytecode, as Etherscan does.
ID (uuid)* | bytecode_hash (text)* | created_at (timestamp)* |
---|---|---|
UUID from the bytecode hash | The bytecode hash of the RainterpreterStore contract | Timestamp when it was added |
rainterpreter_store_addresses
ID (uuid)* | rainterpreter_store (uuid)* | address (text)* | chainId (int8)* | created_at (timestamp)* |
---|---|---|---|---|
UUID generated from address + chainId | UUID that relates to a rainterpreter_stores row | The address of the rainterpreter_store | The chain id where the rainterpreter_store is deployed | Timestamp when it was added |
As mentioned, the goal is to use the studio's current workflow and avoid refactoring unnecessary things. The metadata is a great way to save all the relevant information about a contract that is necessary for its use. Almost all the fields will remain in the metadata, with the exception of some, like the addresses, since those are handled in a different way.

Metadata format that will go into the contracts -> metadata column:
{
"name": "",
"source": "",
"description": "",
"inputs": ["input_1", "input_2"],
"expressions": ["expression_1", "expression_2"]
}
cron job

There must be a specific job, executed from time to time, that updates the data in Supabase. The contract meta obtained from the Subgraph should be decoded using CBOR, based on its encoding and data type, so it can be fully handled and saved into the Supabase tables with the specific, expected design. There are different ways to execute this, but the best way (at the time this is written) is to use GitHub workflows with a schedule that runs every X minutes.

For that reason, the job needs sufficient access to the Supabase project to write all this data. This could be achieved using the service_role API key that Supabase provides. EDIT: we can instead use the pg_cron extension that Supabase provides to schedule the edge function.
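Scheduling the edge function from inside the database could look something like this — a sketch assuming Supabase's pg_cron and pg_net extensions; the job name, URL, and five-minute interval are all placeholders:

```sql
-- Hypothetical pg_cron schedule: invoke a sync edge function every 5 minutes.
-- Assumes pg_cron and pg_net are enabled; the function name, project ref,
-- and key placeholder below are assumptions, not real values.
SELECT cron.schedule(
  'sync-subgraph-contracts',   -- job name (placeholder)
  '*/5 * * * *',               -- every 5 minutes
  $$
  SELECT net.http_post(
    url     := 'https://<project-ref>.functions.supabase.co/sync-contracts',
    headers := '{"Authorization": "Bearer <service_role key>"}'::jsonb
  );
  $$
);
```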
This happened to me when the session ended while I was checking the expressions dashboard.

The app crashes with a 500 Internal Error, with this on the server side:
Invalid "number" argument: NaN
RangeError: Invalid "number" argument: NaN
at parseNumber (/home/nanez/terra-virtua/rain-studio/node_modules/relative-time-format/commonjs/RelativeTimeFormat.js:455:13)
at parseFormatArgs (/home/nanez/terra-virtua/rain-studio/node_modules/relative-time-format/commonjs/RelativeTimeFormat.js:485:11)
at RelativeTimeFormat.format (/home/nanez/terra-virtua/rain-studio/node_modules/relative-time-format/commonjs/RelativeTimeFormat.js:160:30)
at TimeAgo.formatDateForStep (/home/nanez/terra-virtua/rain-studio/node_modules/javascript-time-ago/commonjs/TimeAgo.js:337:48)
at TimeAgo.format (/home/nanez/terra-virtua/rain-studio/node_modules/javascript-time-ago/commonjs/TimeAgo.js:233:32)
at TimeAgo.svelte:14:27
at Object.$$render (/node_modules/svelte/internal/index.mjs:1876:22)
at eval (/src/lib/expressions/ExpressionRow.svelte:25:88)
at Object.$$render (/node_modules/svelte/internal/index.mjs:1876:22)
at eval (/src/lib/expressions/DeployedExpressionSummaryRow.svelte:45:108)
To reproduce this error, as I said before, just log out and try to access any user's expressions dashboard, like /user/nanezx/expressions, and the app will crash.
This is a placeholder until this task can be more fleshed out
Contexts
Specifically:
This would need to support deployed expressions (by address) and draft expressions (by ID).
Only draft expressions with the new public flag can be favourited.
Probably need to create a new "starred" table
"address" and "foreign_key" should be mutually exclusive. We can write a Postgres CONSTRAINT for this...
ALTER TABLE user_favourites
ADD CONSTRAINT foreign_key_or_address
CHECK (num_nonnulls(foreign_key, address) = 1); -- exactly one of the two is set
something like this