
ethereum-nft-activity

How much energy does it take to power popular Ethereum-backed CryptoArt platforms? And what emissions are associated with this energy use?

These questions do not have clear answers for two reasons:

  1. The overall energy usage and emissions of Ethereum are hard to estimate. I am working on this in a separate repo: kylemcdonald/ethereum-energy
  2. The portion for which a specific user, platform, or transaction might be considered "responsible" is more of a philosophical question than a technical one. Like many complex systems, there is an indirect relationship between the service and the emissions. I am working on different approaches in this notebook: Per-Transaction Models

This table represents one method for computing emissions, as of October 8, 2021. The methodology is described below.

| Name | Gas | Transactions | kgCO2 |
| --- | ---: | ---: | ---: |
| Art Blocks | 8756400546 | 73620 | 3336893 |
| Async | 2682742201 | 23928 | 905177 |
| Foundation | 70824065154 | 395269 | 31629583 |
| KnownOrigin | 11452164645 | 61186 | 3977724 |
| Makersplace | 32657198905 | 110226 | 10932105 |
| Nifty Gateway | 1445519970 | 29279 | 661300 |
| OpenSea | 1592368367580 | 8888770 | 655648763 |
| Rarible | 171642953984 | 1246418 | 67217701 |
| SuperRare | 24336594302 | 251460 | 8772221 |
| Zora | 4085215807 | 15165 | 1844186 |

Preparation

First, sign up for an API key at Etherscan. Create env.json and add the API key. It should look like:

{
    "etherscan-api-key": "<etherscan-api-key>"
}
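
As a minimal sketch of how the key can then be read in Python (an illustration only, assuming the scripts load env.json from the project root; the repository's actual loading code may differ):

```python
import json

# Minimal sketch, not the repository's actual loading code:
# read the Etherscan API key from env.json in the project root.
with open("env.json") as f:
    env = json.load(f)

etherscan_api_key = env["etherscan-api-key"]
```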

Install dependencies:

pip install -r requirements.txt

Note: this project requires Python 3.

contracts_footprint.py

This will pull all the transactions from Etherscan, sum the gas and transaction counts, and do a basic emissions estimate. Results are saved in the /output directory as JSON or TSV. Run the script with, for example: python contracts_footprint.py --verbose --tsv data/contracts.json data/nifty-gateway-contracts.json.

The first run can take a while, because the local cache has to be built from scratch. Updating again after a week can take 5 minutes or more to download all new transactions. The entire cache can grow to multiple gigabytes.

This script has a few unique additional flags:

  • --summary to summarize the results in a format similar to the above table, combining multiple contracts into a single row of output.
  • --startdate and --enddate can be used to only analyze a specific date range, using the format YYYY-MM-DD.
  • --tsv will save the results of analysis as a TSV file instead of JSON.
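
Conceptually, the aggregation this script performs is a sum of gas and a count of transactions, optionally restricted to a date range. The sketch below only illustrates that idea; it is not the script's actual code, and it assumes Etherscan-style transaction dicts with "gasUsed" and "timeStamp" (unix seconds) fields.

```python
from datetime import datetime, timezone

def summarize(transactions, start=None, end=None):
    """Sum gas and count transactions, optionally limited to [start, end].

    Illustrative only: assumes Etherscan-style dicts with "gasUsed" and
    "timeStamp" (unix seconds) fields; start and end are datetime.date values.
    """
    total_gas, count = 0, 0
    for tx in transactions:
        day = datetime.fromtimestamp(int(tx["timeStamp"]), tz=timezone.utc).date()
        if (start and day < start) or (end and day > end):
            continue
        total_gas += int(tx["gasUsed"])
        count += 1
    return total_gas, count
```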

contracts_history.py

This will pull all the transactions from Etherscan, sum the transaction fees and gas used, and group by day and platform. Results are saved in the /output directory as CSV files. Run the script with, for example: python contracts_history.py --verbose data/contracts.json data/nifty-gateway-contracts.json

The most recent results are cached in the gh_pages branch.
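
The grouping step can be pictured as building a (platform, day) table of gas and fees. Again, this is only a sketch under the same assumptions about Etherscan-style fields (gasUsed, gasPrice in wei, timeStamp in unix seconds), not the script's actual code:

```python
from collections import defaultdict
from datetime import datetime, timezone

def daily_totals(transactions_by_platform):
    """Group gas used and transaction fees (in ETH) by (platform, day).

    Illustrative only: field names follow Etherscan's transaction format.
    """
    totals = defaultdict(lambda: {"gas": 0, "fees_eth": 0.0})
    for platform, txs in transactions_by_platform.items():
        for tx in txs:
            day = datetime.fromtimestamp(int(tx["timeStamp"]), tz=timezone.utc).date()
            gas = int(tx["gasUsed"])
            totals[(platform, day)]["gas"] += gas
            totals[(platform, day)]["fees_eth"] += gas * int(tx["gasPrice"]) / 1e18
    return totals
```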

Additional flags

Both scripts have these shared additional flags:

  • --noupdate runs from cached results. This will not make any requests to Nifty Gateway or Etherscan. When using the Etherscan class in code without an API key, this is the default behavior.
  • --verbose prints progress when scraping Nifty Gateway or pulling transactions from Etherscan.

Helper scripts

  • python ethereum_stats.py will pull stats from Etherscan like daily fees and block rewards and save them to data/ethereum-stats.json
  • python nifty_gateway.py will scrape all the contracts from Nifty Gateway and save them to data/nifty-gateway-contracts.json

Methodology

The footprint of a platform is the sum of the footprints for all artwork on the platform. Most platforms use a few Ethereum contracts to handle all artworks. For each contract, we download all the transactions associated with the contract from Etherscan. Then for each transaction, we compute the kgCO2/gas footprint for that day, based on three values:

  1. The emissions intensity in kgCO2/kWh for the entire Ethereum network. This is an average of the emissions intensity for each mining pool in 2019, weighted by their percentage of the hashrate.
  2. The total power used during that day, estimated by Digiconomist.
  3. The total gas used during that day, measured by Etherscan.

The total kgCO2 for a platform is the sum, over all of its transactions, of the gas used by each transaction times the kgCO2/gas factor for that day. Finally, we add 20% to cover "network inefficiencies and unaccounted for mining pools", as originally described by Offsetra. Offsetra has since removed this 20% from their method.
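
Written out as code, the calculation described above looks roughly like this. It is a sketch only: the function and field names are assumptions, and the real inputs come from the mining-pool weighting, Digiconomist, and Etherscan.

```python
def kgco2_per_gas(intensity_kgco2_per_kwh, network_kwh, network_gas):
    """Per-day emissions factor: kgCO2 emitted per unit of gas used network-wide."""
    return intensity_kgco2_per_kwh * network_kwh / network_gas

def platform_kgco2(transactions, factor_by_day, overhead=1.20):
    """Sum each transaction's gas times that day's kgCO2/gas factor,
    then apply the 20% overhead described above."""
    total = sum(tx["gas_used"] * factor_by_day[tx["day"]] for tx in transactions)
    return total * overhead
```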

Sources

Contracts are sourced from a combination of personal research and DappRadar.

When possible, we have confirmed contract coverage directly with the marketplaces. Confirmed contracts include:

  • SuperRare: all confirmed
  • Foundation: all confirmed
  • OpenSea: some contracts on DappRadar have not been confirmed
  • Nifty Gateway: all confirmed

Limitations

  • Digiconomist's Bitcoin estimates have been criticized both as too low (by roughly 5x) and as too high (by roughly 2x) compared to other estimates. It may be possible to make a more accurate estimate for Ethereum with a different methodology, based on the available mining hardware and its corresponding power usage. That said, even the official Ethereum website references Digiconomist when discussing power usage. Work on a more accurate bottom-up energy and emissions estimate for Ethereum is happening in kylemcdonald/ethereum-energy.
  • Mining pool locations and the corresponding emissions intensity may have changed significantly from the 2019 values. A full correction might correspond to a +/-50% change.

How to add more platforms

To modify this code so that it works with more platforms, add every possible contract and wallet for each platform to the data/contracts.json file, using the format:

"<Platform Name>/<Contract Name>": "<0xAddress>"

Additionally, any NFT-specific contracts (e.g. ERC-721 and compatible) should be added to the data/marketplaces-collectibles-games.json file, with a similar format.
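
For reference, keys in that format are straightforward to group by platform. The snippet below only illustrates the "<Platform Name>/<Contract Name>" key convention described above; it is not part of the repository's tooling, and it assumes data/contracts.json is a flat mapping from keys to addresses:

```python
import json
from collections import defaultdict

# Illustration of the key convention only; assumes a flat {key: address} mapping.
with open("data/contracts.json") as f:
    contracts = json.load(f)

by_platform = defaultdict(dict)
for key, address in contracts.items():
    platform, contract_name = key.split("/", 1)
    by_platform[platform][contract_name] = address

for platform, entries in sorted(by_platform.items()):
    print(f"{platform}: {len(entries)} contracts")
```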

Then submit a pull request back to this repository. Thanks in advance!

To track Cryptokitties, you will need to explicitly remove the Cryptokitties contract from the blocklist in etherscan.py. Cryptokitties is blocked by default because a collection of Momo Wang Cryptokitties has been grafted onto the Nifty Gateway marketplace, and tracking it would otherwise require this script to pull every Cryptokitties transaction ever made.

Contracts and Addresses

Contracts and addresses used by each platform can be found in data/contracts.json; they are also listed here, with the Markdown generated by python print_contracts.py. Nifty Gateway contracts are listed separately in data/nifty-gateway-contracts.json.

Art Blocks

Async

  • ASYNC 2020-02-25 to 2021-05-02
  • ASYNC-V2 2020-07-21 to 2021-05-03

Foundation

KnownOrigin

Makersplace

Nifty Gateway

OpenSea

Rarible

SuperRare

Zora
