openrarity / open-rarity

Reference implementation of the OpenRarity protocol with Python.

License: Apache License 2.0

Python 100.00%
apache2 nft open-source opensea rarity curio etherium proof icy crypto

open-rarity's Introduction

OpenRarity

We’re excited to announce OpenRarity, a new rarity protocol we’re building for the NFT community. Our objective is to provide a transparent rarity calculation that is entirely open-source, objective, and reproducible.

With the explosion of new collections, marketplaces and tooling in the NFT ecosystem, we realized that rarity ranks often differed across platforms which could lead to confusion for buyers, sellers and creators. We believe it’s important to find a way to provide a unified and consistent set of rarity rankings across all platforms to help build more trust and transparency in the industry.

We are releasing the OpenRarity library in a Beta preview to crowdsource feedback from the community and incorporate it into the library evolution.

See the full announcement in the blog post.

Developer documentation

Read developer documentation on how to integrate with OpenRarity.

Setup and run tests locally

poetry install # install dependencies locally
poetry run pytest # run tests

Some tests are skipped by default because they are slower integration tests. To run the resolver tests:

poetry run pytest -k test_testset_resolver --run-resolvers

Library usage

You can install OpenRarity as a Python package to use it in your project:

pip install open-rarity

Please refer to the scripts/ folder for an example of how to use the library.
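
For a quick sense of the API before digging into scripts/, here is a minimal sketch assembled from the examples shown elsewhere on this page (Collection, Token.from_erc721 and RarityRanker.rank_collection); the contract address and trait values are placeholders.

# Minimal usage sketch (placeholder data); see scripts/ for the maintained examples.
from open_rarity import Collection, Token
from open_rarity.rarity_ranker import RarityRanker

collection = Collection(
    name="Example Collection",
    tokens=[
        Token.from_erc721(
            contract_address="0x0000000000000000000000000000000000000000",
            token_id=token_id,
            metadata_dict=metadata,
        )
        for token_id, metadata in [
            (1, {"hat": "cap", "shirt": "blue"}),
            (2, {"hat": "visor", "shirt": "blue"}),
            (3, {"hat": "visor", "shirt": "green"}),
        ]
    ],
)

# Each ranked token exposes .token, .rank and .score.
for ranked_token in RarityRanker.rank_collection(collection=collection):
    print(ranked_token.token, ranked_token.rank, ranked_token.score)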

If you have downloaded the repo, you can use the OpenRarity shell tool to generate JSON or CSV outputs of OpenRarity scores and ranks for any collections:

python -m scripts.score_real_collections boredapeyachtclub proof-moonbirds

Read the developer documentation for advanced library usage.

Contributions guide and governance

OpenRarity is a community effort to improve rarity computation for NFTs (Non-Fungible Tokens). The core collaboration group consists of four primary contributors: Curio, icy.tools, OpenSea, and Proof.

OpenRarity is an open-source project and all contributions are welcome. Consider the following steps when requesting or proposing a contribution:

  • Have a question? Submit it on the OpenRarity GitHub discussions page
  • Found a problem? Create a GitHub issue with a description of the problem
  • Submit a pull request with the proposed changes
  • To merge a change into the main branch, you need at least 2 approvals from the project maintainer list
  • Always add a unit test with your changes

We use git pre-commit hooks in the OpenRarity repo. Install them with the following command:

poetry run pre-commit install

Project Setup and Core technologies

We use the following core technologies in OpenRarity:

  • Python ≥ 3.10.x
  • Poetry for dependency management
  • Numpy ≥1.23.1
  • PyTest for unit tests

License

Apache 2.0, OpenSea, ICY, Curio, PROOF

open-rarity's People

Contributors

amamujee, aschlosberg, block-chaynes, dadashi, damjankuznar, grantli-os, impreso, jerome-qn, julianaticy, ryanio, snuderl, stephankmin, theelderbeever, vickygos

open-rarity's Issues

Allow user to control token batch size

Is your feature request related to a problem? Please describe.
Currently, the batch_size variable in the get_collection_from_opensea function is hard-coded to 30.

Describe the solution you'd like
I'd like a solution allowing a user to control this variable to speed up the processing.

Additional context
While the OS docs don't state a maximum length for token_ids, the query results are capped at 50. Is it therefore best to set this batch size to 50, or does the response return more than 50 assets when more token_ids are supplied? If there is no cap, I think it's best to let the user decide the batch size; if there is a cap, the batch size should be capped at that maximum, i.e. 50.
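
A minimal sketch of the clamping behavior described above, written as a standalone helper (the function name and parameters are illustrative, not part of the library):

# Hypothetical helper: honor a user-supplied batch size while respecting the
# 50-token_ids-per-request cap mentioned above.
def resolve_batch_size(requested: int, api_cap: int = 50) -> int:
    if requested < 1:
        raise ValueError("batch size must be a positive integer")
    return min(requested, api_cap)

print(resolve_batch_size(30))   # 30 (today's hard-coded value)
print(resolve_batch_size(100))  # 50 (clamped to the documented cap)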

Number/Date display_type support

Handle non-categorical display_types (number, date).

The initial strategy involves binning based on the total range and the number of unique numeric values.
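
A rough sketch of that binning idea (illustrative only, not library code), assuming numpy is available:

# Map numeric trait values to bin labels so they can be scored like categorical values.
import numpy as np

def bin_numeric_trait(values, max_bins=10):
    arr = np.asarray(values, dtype=float)
    n_bins = min(max_bins, len(np.unique(arr)))          # never more bins than unique values
    edges = np.linspace(arr.min(), arr.max(), n_bins + 1)
    indices = np.digitize(arr, edges[1:-1], right=True)  # bin index 0 .. n_bins-1
    return [f"bin_{i}" for i in indices]

print(bin_numeric_trait([1, 2, 3, 50, 99, 100], max_bins=4))
# ['bin_0', 'bin_0', 'bin_0', 'bin_1', 'bin_3', 'bin_3']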

Empty JSON querying Arbitrum OS collection

Describe the bug
Trying to query the tour-de-berance collection on Arbitrum OpenSea returns an empty JSON. Is there a need to specify the Arbitrum RPC?

python -m scripts.score_real_collections tour-de-berance
Scoring collections: ['tour-de-berance'] with use_cache=True
Output file prefix: score_real_collections_results with type .json
Generating results for: tour-de-berance
No opensea cache file found for tour-de-berance: cached_data/tour-de-berance_cached_os_trait_data.json

Created collection tour-de-berance with 0 tokens
Token ID and their ranks and scores, sorted by rank
Outputted results to: score_real_collections_results_tour-de-berance.json
Finished scoring and ranking collections. Output files:
	score_real_collections_results_tour-de-berance.json

Information

https://opensea.io/collection/tour-de-berance
ERC721
Arbitrum

To Reproduce
Steps to reproduce the behavior:
python -m scripts.score_real_collections tour-de-berance

Expected behavior
Should export JSON with rarity data.

Environment

  • OS: MacOS 11.4
  • Python 3.10.3

Support Python 3.11

Self-explanatory. Python 3.11 was released on October 24th, 2022. We need to verify dependency support and forward compatibility of the library.

Fetch Assets using GraphQL

Now that we are adding more and more to the Assets endpoint, can the OpenRarity team use its clout to have OpenSea open up its GraphQL API to everyone? That way we could define exactly which fields we want to fetch. Thanks 😀

Error when parsing, calling lower() to an int

Describe the bug
On a few collections (deadfellaz, cyberbrokers, and more) there is an issue when parsing their traits.

Information

  • Collection link/name: deadfellaz or cyberbrokers
  • Contract standard: erc721
  • Chain: ethereum mainnet

To Reproduce
Steps to reproduce the behavior:
run: python3 -m scripts.score_real_collections deadfellaz

Expected behavior
Int traits should not be lowercased

Environment

  • OS: macOS
  • Python = 3.10.x
  • Library Version = 0.7.0b0

Additional context
Stacktrace:

python3 -m scripts.score_real_collections deadfellaz
Scoring collections: ['deadfellaz'] with use_cache=True
Output file prefix: score_real_collections_results with type .json
Generating results for: deadfellaz
No opensea cache file found for deadfellaz: cached_data/deadfellaz_cached_os_trait_data.json
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/scripts/score_real_collections.py", line 117, in <module>
    score_collection_and_output_results(
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/scripts/score_real_collections.py", line 47, in score_collection_and_output_results
    collection = get_collection_from_opensea(slug, use_cache=use_cache)
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/open_rarity/resolver/opensea_api_helpers.py", line 391, in get_collection_from_opensea
    tokens = get_all_collection_tokens(
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/open_rarity/resolver/opensea_api_helpers.py", line 206, in get_all_collection_tokens
    tokens_batch = get_tokens_from_opensea(
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/open_rarity/resolver/opensea_api_helpers.py", line 277, in get_tokens_from_opensea
    token_metadata = opensea_traits_to_token_metadata(asset_traits=asset["traits"])
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/open_rarity/resolver/opensea_api_helpers.py", line 145, in opensea_traits_to_token_metadata
    string_attr[trait["trait_type"]] = StringAttribute(
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/open_rarity/models/token_metadata.py", line 30, in __init__
    self.value = normalize_attribute_string(value)
  File "/Users/maximiliano/repos/pixel/localhost/open-rarity/open_rarity/models/utils/attribute_utils.py", line 17, in normalize_attribute_string
    return value.lower().strip()
AttributeError: 'int' object has no attribute 'lower'
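
For reference, a minimal sketch of one possible guard (not necessarily the fix that shipped in a later release): only lowercase genuine strings and pass numeric values through unchanged, which matches the expected behavior stated above.

# Sketch only: lowercase/strip string attribute values, leave int/float traits untouched.
def normalize_attribute_string(value):
    if isinstance(value, str):
        return value.lower().strip()
    return value

print(normalize_attribute_string("  Short Mohawk "))  # "short mohawk"
print(normalize_attribute_string(3))                  # 3 (unchanged)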

How are we supposed to use this library with a local dataset?

I pulled this code down and have been very confused by the documentation on how to run it on a local dataset. I'm looking at the piece of code below.

I have a CSV which has all the asset names along with the occurrences. What do I need to do to feed this data to calculate the rarities? Is it possible to just feed a dictionary with all the assets at once?

If you can show me an example of what I need to do that would be great!

Thank you!

from open_rarity import Collection, OpenRarityScorer, Token
from open_rarity.rarity_ranker import RarityRanker

if __name__ == "__main__":
    scorer = OpenRarityScorer()

    collection = Collection(
        name="My Collection Name",
        tokens=[
            Token.from_erc721(
                contract_address="0xa3049...",
                token_id=1,
                metadata_dict={"hat": "cap", "shirt": "blue"},
            ),
            Token.from_erc721(
                contract_address="0xa3049...",
                token_id=2,
                metadata_dict={"hat": "visor", "shirt": "green"},
            ),
            Token.from_erc721(
                contract_address="0xa3049...",
                token_id=3,
                metadata_dict={"hat": "visor", "shirt": "blue"},
            ),
        ],
    )  # Replace inputs with your collection-specific details here

    # Generate scores for a collection
    token_scores = scorer.score_collection(collection=collection)

    print(f"Token scores for collection: {token_scores}")

    # Generate score for a single token in a collection
    token = collection.tokens[0]  # Your token details filled in
    token_score = scorer.score_token(collection=collection, token=token)

    # Better yet.. just use ranker directly!
    ranked_tokens = RarityRanker.rank_collection(collection=collection)
    for ranked_token in ranked_tokens:
        print(
            f"Token {ranked_token.token} has rank {ranked_token.rank} "
            "and score {ranked_token.score}"
        )
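
Since OpenRarity scores per-token trait dictionaries, a counts-only CSV would first need to be expanded into one row of traits per token. One possible approach, sketched under the assumption of a hypothetical per-token layout (a token_id column plus one column per trait); this is not an official answer:

# Sketch: build a Collection from a local CSV with columns token_id, hat, shirt, ...
import csv

from open_rarity import Collection, Token
from open_rarity.rarity_ranker import RarityRanker

tokens = []
with open("my_collection.csv", newline="") as f:
    for row in csv.DictReader(f):
        token_id = int(row.pop("token_id"))
        tokens.append(
            Token.from_erc721(
                contract_address="0x0000000000000000000000000000000000000000",  # placeholder
                token_id=token_id,
                metadata_dict=row,  # remaining columns become trait_name -> value
            )
        )

collection = Collection(name="My Local Collection", tokens=tokens)
for ranked_token in RarityRanker.rank_collection(collection=collection):
    print(ranked_token.token, ranked_token.rank, ranked_token.score)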

Semi-Fungible Token support

OpenRarity to support Semi-Fungible tokens such as ERC1155. Requires handling individual token_supply to modify trait counts.

Rarity seems to be pulling from total potential contract supply, rather than current minted supply

My client decided to end the mint early. The contract's potential max supply is 6969, but the current minted supply is 3950. We switched to IPFS with 50 additional tokens in our metadata, which means there are 4000 tokens with trait metadata in total, although the last 50 of those have not been minted yet. OpenRarity is showing rarity ranks of 4,000 up to 6,000+.

Information

To Reproduce
Steps to reproduce the behavior:
Viewing tokens on OpenSea, we can see that OpenRarity is assigning ranks well outside the current supply of tokens with trait metadata.

Expected behavior
Rarity should be calculated based on the total number of tokens with trait metadata.

Version 1.0.0

The following is a list of issues to be completed prior to release of openrarity 1.0.0.

  • Data Model Refactor #73
  • Checksum input/output verification #55
  • Semi-Fungible Token support #87
  • Number/Date display_type support #88
  • (Optional) Python 3.11 support #90

OpenRarity rank not calculating correctly on Skulptuur collection

Do you notice the rarity rank for all other assets with that trait? They are ranked 21-80. #118 is the only asset with that trait outside of that range and it's ranked 975. It also has a rare environment trait, which is why it should be among the rarest of that group.

Previous reply copied below was incorrect and issue improperly closed:
Hi, thanks for reporting. This is not a bug and OpenRarity works as expected. I checked the collection and Camera Height = High is not a unique trait. Here are all assets with this trait:
https://opensea.io/collection/skulptuur-by-piter-pasma?search[sortAscending]=true&search[sortBy]=UNIT_PRICE&search[stringTraits][0][name]=camera_height&search[stringTraits][0][values][0]=high

We consider the whole trait combination when we compute rarity, not a single trait, and this particular asset has much less rare traits in its combination.

Originally posted by @impreso in #114 (comment)

Attribute weighting should take into consideration number of null traits

Problem
Let’s say a collection of 5 tokens exists, and these are the traits for each token, in order:
[
{"bottom": "1", "hat": "1", "special": "true"},
{"bottom": "1", "hat": "1"},
{"bottom": "2", "hat": "2"},
{"bottom": "2", "hat": "2"},
{"bottom": "3", "hat": "2"},
]
Now, say there’s another collection of 5 tokens, and these are the traits
[
{"bottom": "1", "hat": "1", "special": "true"},
{"bottom": "1", "hat": "1", "special": "false"},
{"bottom": "2", "hat": "2", "special": "false"},
{"bottom": "2", "hat": "2", "special": "false"},
{"bottom": "3", "hat": "2", "special": "false"},
]
The only difference is that collection 1 leaves out special while collection 2 explicitly says special=false.
Currently, get_token_attributes_scores_and_weights, whose weight outputs are used by the arithmetic mean, etc., produces different weights for the above two collections' tokens.

Describe the solution you'd like
The above collections should have identical scores for their tokens; this can be achieved systematically by accounting for a null-type trait.

E.g.

attr_weights = [
    1 / collection.total_attribute_values(attr_name)
    for attr_name in sorted_attr_names
]

should become:

attr_weights = [
    1 / (collection.total_attribute_values(attr_name)
         + (1 if attr_name in null_attributes else 0))
    for attr_name in sorted_attr_names
]

Additional context
Note: the group has agreed to do this, but we will defer the change until we decide on the algorithm for OpenRarity.
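
To make the difference concrete, here is a small standalone calculation (plain Python, no open_rarity helpers; the function name is illustrative) of the per-attribute unique-value counts that the 1/total_attribute_values weights are based on, with and without the proposed null accounting:

collection_1 = [
    {"bottom": "1", "hat": "1", "special": "true"},
    {"bottom": "1", "hat": "1"},
    {"bottom": "2", "hat": "2"},
    {"bottom": "2", "hat": "2"},
    {"bottom": "3", "hat": "2"},
]
collection_2 = [
    {"bottom": "1", "hat": "1", "special": "true"},
    {"bottom": "1", "hat": "1", "special": "false"},
    {"bottom": "2", "hat": "2", "special": "false"},
    {"bottom": "2", "hat": "2", "special": "false"},
    {"bottom": "3", "hat": "2", "special": "false"},
]

def unique_value_counts(tokens, count_null=False):
    """Number of distinct values per attribute, optionally counting 'missing' as a value."""
    names = {name for t in tokens for name in t}
    counts = {}
    for name in names:
        values = {t[name] for t in tokens if name in t}
        if count_null and any(name not in t for t in tokens):
            values.add(None)  # treat absence as its own value
        counts[name] = len(values)
    return counts

# Today 'special' has 1 unique value in collection 1 but 2 in collection 2, so the
# 1/count weights differ; with null accounting both collections give 2.
print(unique_value_counts(collection_1)["special"])                   # 1
print(unique_value_counts(collection_1, count_null=True)["special"])  # 2
print(unique_value_counts(collection_2)["special"])                   # 2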

TokenMetadata doesn't support duplicate trait names with differing values

Describe the bug
Some token collections, such as Crypto Coven #117, have multiple instances of the same trait name (Top) with differing values. Currently, the token metadata is parsed into a dictionary, which overwrites the duplicate names.
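
A small standalone sketch of the behavior and one possible alternative (grouping duplicate trait names into lists); the trait values below are made up for illustration and this is not the library's actual parsing code:

from collections import defaultdict

raw_traits = [
    {"trait_type": "Top", "value": "bandana"},
    {"trait_type": "Top", "value": "halo"},  # second "Top" value on the same token
]

# Current behavior: a plain dict keeps only the last value per trait name.
as_dict = {t["trait_type"]: t["value"] for t in raw_traits}
print(as_dict)  # {'Top': 'halo'} -- 'bandana' is silently lost

# One alternative: preserve every value per trait name.
grouped = defaultdict(list)
for t in raw_traits:
    grouped[t["trait_type"]].append(t["value"])
print(dict(grouped))  # {'Top': ['bandana', 'halo']}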

Information

To Reproduce
Process Crypto Coven; note that it is a silent bug.

Expected behavior
Don't overwrite the value

Environment

  • OS: [e.g. iOS]: Darwin
  • Python [e.g. 3.10.3]: 3.10.6
  • Library Version [e.g. 1.0]: 0.5.0-beta

Replicate the product on Javascript

It's been amazing to use so far in Python. That said, is it possible to recreate it in JS too?

It would also be great if we could still use NPM / Yarn to handle the packages similarly to pip.

Thanks

Score/Rank inconsistency on azukielementals

Describe the bug
Higher ranks have lower scores. More precisely, the 10 highest-ranked tokens have low scores.

Information
The collection is azukielementals. I'm pulling the data from OpenSea. I am attaching the two data files, one results file, and one Python file via this Google Drive link: https://drive.google.com/drive/folders/1dSEj1IHBJ4pDyePiBYlsArCw5GwAqXJz?usp=sharing

To Reproduce
The code I used to generate 'computed_rarity.csv' is in 'rarity_helper.py' (attached in the drive). The data used is also attached there.

Expected behavior
I would expect higher ranks to have higher scores.

Screenshots
Screen Shot 2024-01-29 at 11 09 27 AM

Environment

  • OS: Ubuntu 20.04.6 LTS
  • Python: 3.10.13
  • Library Version: 0.7.5

Add multi-process capability to the library

Is your feature request related to a problem? Please describe.
Speed up the resolver script with parallelization.

Describe the solution you'd like
Add multiprocess support to the script to improve performance.
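
A minimal sketch of the idea using the standard library (the per-collection function is a placeholder, not existing resolver code):

from multiprocessing import Pool

def score_one_collection(slug: str) -> str:
    # Placeholder for the per-collection work the resolver script does today,
    # e.g. fetching the collection and writing a results file for `slug`.
    return f"scored {slug}"

if __name__ == "__main__":
    slugs = ["boredapeyachtclub", "proof-moonbirds"]
    with Pool(processes=4) as pool:
        for result in pool.map(score_one_collection, slugs):
            print(result)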

Hashmasks Elementals collection rankings seem wrong

Describe the bug
I have a Hashmasks Elementals NFT that is ranked #23 in the collection. I noticed that the Hashmasks Elementals NFT that is ranked #5 in the collection has traits that are either equal to or more common than those on my NFT, yet it is considered 18 ranks "more rare."

Information

To Reproduce
Look at the OpenRarity rankings for Hashmasks Elementals #5071 vs. #1535, and look at the traits associated with each:

  • Both have "Red Wood Puppet" base
  • Both have "Downward" eyes
  • Specialty for #5071 is "Gold Clock [2%]" vs. #1535's "None [3%]"
  • Season for #5071 is "Deep Winter [0.85%]" vs. #1535's "None [2%]"
  • Background for #5071 is "Circle Speckle [5%]" vs. #1535's "Circle Yellow [14%]"

Expected behavior
I would expect #5071 to have a higher OpenRarity score/ranking than #1535.

Screenshots
#1535:
image
#5071:
image

Environment

  • OS: macOS Ventura 13.3.1 (a) (22E772610a)
  • Python: 3.10.4

Additional context
Nothing at this time.

OpenRarity not updating metadata rank from burned bots

Describe the bug
NFTs that have been sent to burn addresses are not having their rarity rank updated despite their metadata being updated. Our NFT collection has a burn mechanic that sends NFTs to the burn address (0xDeadb071ab55db23Aea4cF9b316faa8B7Bd26196). When an NFT is burnt, we then update the metadata and image to reflect that; however, the rarity rank does not update. As a result, we have burned bots ranking higher than other NFTs depending on the rarity of their previous metadata.

Information

To Reproduce

  1. Create a collection with various OpenRarity rankings on the Ethereum Mainnet.
  2. Send various NFTs within the collection to the burn address (0xDeadb071ab55db23Aea4cF9b316faa8B7Bd26196)
  3. Delete the attributes metadata of the bots that have been sent to the burn address.
  4. Notice the OpenRarity ranks do not update.

Expected behavior
NFTs that have no attribute metadata within a collection should either not be ranked or ranked at the bottom of a collection in terms of rarity, regardless if it has been sent to a burn address or not. I have uploaded a test collection on Testnet with the exact same metadata as the live collection on Mainnet, and the NFTs with burnt metadata within the Testnet collection are not being ranked, which is what we expect from the Mainnet collection.

Test collection link: https://testnets.opensea.io/collection/pawn-bots-hfbp4rvtvj
Mainnet collection link: https://opensea.io/collection/pawnbots

Screenshots
Pawn Bot 7827 has been sent to the burn address and has an OpenRarity rank of 67, even though its metadata has been updated and its previously rare attributes were taken away.
Screen Shot 2022-12-12 at 10 19 36 AM

The same NFT on testnet is not ranked, which is how it should be. (This NFT has NOT been sent to a burn address.)
Screen Shot 2022-12-12 at 11 07 39 AM

Pawn Bot 2993 has the exact same metadata as Pawn Bot 7827 but they both have different rarity ranks.
Screen Shot 2022-12-12 at 11 36 09 AM

Environment
OpenSea website, all devices

Additional context
The Mainnet collection on OpenSea shows a total item count of 7,373 when the total token count is really 8,888; the difference is due to OpenSea not counting burned NFTs.
The Testnet collection shows a total item count of 8,888 because no NFTs there have been sent to the burn address.
It looks like OpenSea is accounting for tokens sent to the burn address and removing them from the collection count. If OpenRarity is also ignoring tokens sent to the burn address, that could be part of the reason why rarity isn't updating on the main collection.

We appreciate any help or insight on this issue. Our team hopes to use OpenRarity as our main source of truth for rarity, and I hope this issue will also help any other projects who may be experiencing similar issues. Thanks!

Proposed fixes to the documentation

I've just tested this and spotted a couple of minor issues with the Gitbook documentation:
https://openrarity.gitbook.io/developers/quick-guides/integrating-openrarity-in-your-application

In the second code block (when leveraging the Opensea API):

The import:
from open_rarity import Collection

Should be:
from open_rarity import RarityRanker

And towards the bottom, when iterating through the results, the result must be dereferenced by token before accessing token_identifier.

So in all, the code block should look something more like this:

# OpenRarity version 0.4.0-beta
from open_rarity import RarityRanker
from open_rarity.resolver.opensea_api_helpers import (
    get_collection_from_opensea,
)

slug = 'proof-moonbirds'
# Create OpenRarity collection object from OpenSea API
collection = get_collection_from_opensea(slug)

# Generate scores for a collection
ranked_tokens = RarityRanker.rank_collection(collection=collection)

# Iterate over the ranked and sorted tokens
for token in ranked_tokens:
    token_id = token.token.token_identifier.token_id
    rank = token.rank
    score = token.score
    print(f"\tToken {token_id} has rank {rank} score: {score}")

Using OpenSea Helpers with Mutant Ape Yacht Club

Describe the bug
Hi, this library has been very helpful for me so far. I am currently trying to use the get_collection_from_opensea function with the slug for Mutant Ape Yacht Club (mutant-ape-yacht-club). For some reason, I am only getting ~14,500 tokens back when there are closer to ~19,500 total tokens. collection.token_total_supply is at ~14,500.

Does anyone know why this is the case? Thanks very much!

Information

To Reproduce
Steps to reproduce the behavior:
collection = get_collection_from_opensea('mutant-ape-yacht-club')
print(collection.token_total_supply)

Expected behavior
Retrieve ~19,500 tokens

Add `max_score` in addition to `max_rank`

For anyone who'd like to use the score attribute for subsequent calculations, it would be helpful to also know the max_score per token, and perhaps per collection (if it varies from collection to collection).
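
As a stopgap, a collection-level max score can already be derived from the ranked tokens the library returns; a minimal sketch (not an existing open_rarity API):

from open_rarity.rarity_ranker import RarityRanker

def max_score_for(collection) -> float:
    """Highest OpenRarity score in a collection (assumes at least one ranked token)."""
    ranked_tokens = RarityRanker.rank_collection(collection=collection)
    return max(t.score for t in ranked_tokens)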

OpenRarity rank not calculating correctly on Skulptuur collection

Describe the bug
OpenRarity is incorrectly calculating rarity on this collection.

To Reproduce
Steps to reproduce the behavior: compare the rarity ranks between sites and look at the traits in the collection. Camera Height = High is the second-rarest trait, and the rank should reflect that.

Expected behavior
Recalculate rarity on this collection to accurately reflect traits.

About contributing to the golang version of openrarity

OpenRarity is an outstanding effort that is trying to smooth out the variance in rarity algorithms across platforms.

Although an official Python implementation is provided, as a basic library it would be better if there were implementations in more common languages (such as JavaScript, Golang, and Rust), because the calculation of rarity is not limited to offline scripts; it is likely to be used for real-time calculation by back-end services.

Therefore, I spent some time providing a Golang implementation: https://github.com/Base-Labs/openrarity-go.

During the implementation I tried to make sure that the design of the functions, classes, and unit tests is consistent with OpenRarity/open-rarity, which will make adding new features or locating differences very easy in the future. After a lot of work, it computes results in perfect agreement with the official implementation.

Now I would like to contribute openrarity-go to OpenRarity for the following reasons.

  1. OpenRarity is an organization, and open-rarity is just its Python implementation; logically, the organization should contain implementations in more languages.
  2. Putting openrarity-go under https://github.com/OpenRarity/ makes it easier for developers to discover and use it.
  3. Post-release maintenance of openrarity-go can be left to the community, but for now v1.0.0 looks more than enough to use.

Any response would be appreciated.

Error when running testset_resolver

Describe the bug
I ran python3 -m open_rarity.resolver.testset_resolver external to try to get a test example working (not using caching) for cool-cats-nft. See output log below.

Information

  • Collection link/name - cool-cats-nft
  • Contract standard - Not sure
  • Chain - ETH

To Reproduce
Steps to reproduce the behavior:
python3 -m open_rarity.resolver.testset_resolver external

Expected behavior
It should output the rarity for cool-cats-nft

Screenshots

Executing main: with Namespace(resolve_external_rarity='external', cache_fetched_data=True, filename='test_collections.json')
Fetching collection and token trait data for: cool-cats-nft
Fetching external rarity ranks for: cool-cats-nft
Starting batch 0 for collection cool-cats-nft: Processing 293 tokens
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/jamesuejio/Crypto/open-rarity/open_rarity/resolver/testset_resolver.py", line 568, in <module>
    resolve_collection_data(
  File "/Users/jamesuejio/Crypto/open-rarity/open_rarity/resolver/testset_resolver.py", line 226, in resolve_collection_data
    tokens_with_rarity: list[TokenWithRarityData] = get_tokens_with_rarity(
  File "/Users/jamesuejio/Crypto/open-rarity/open_rarity/resolver/testset_resolver.py", line 151, in get_tokens_with_rarity
    external_rarity_provider.fetch_and_update_ranks(
  File "/Users/jamesuejio/Crypto/open-rarity/open_rarity/resolver/rarity_providers/external_rarity_provider.py", line 483, in fetch_and_update_ranks
    self._add_rarity_sniffer_rarity_data(
  File "/Users/jamesuejio/Crypto/open-rarity/open_rarity/resolver/rarity_providers/external_rarity_provider.py", line 364, in _add_rarity_sniffer_rarity_data
    token_ids_to_ranks = fetch_rarity_sniffer_rank_for_collection(
  File "/Users/jamesuejio/Crypto/open-rarity/open_rarity/resolver/rarity_providers/external_rarity_provider.py", line 123, in fetch_rarity_sniffer_rank_for_collection
    tokens_to_ranks: dict[int, int] = {
  File "/Users/jamesuejio/Crypto/open-rarity/open_rarity/resolver/rarity_providers/external_rarity_provider.py", line 124, in <dictcomp>
    str(nft["id"]): int(nft["positionId"]) for nft in response.json()["data"]
TypeError: string indices must be integers

Environment

  • OS: [e.g. iOS] - MacOS Monterey 12.2.1
  • Python [e.g. 3.10.3] - 3.10.7
  • Library Version [e.g. 1.0] - Latest master

Additional context
I was just trying to play around and get a test example working (not using caching).

Add a checksum to the top-level output of rank_collection

It would be helpful to be able to see a checksum of the traits used to generate rarity so that publishers can compare to each other and see that we all used the same input to generate our ranks.

Describe the solution you'd like

result = RarityRanker.rank_collection(
    collection=collection
)

# result.checksum - string
# result.ranked_tokens - the original array

The checksum would be generated from an alphabetically (ascending) sorted array of all of the attributes that will be included for ranking (e.g. numeric traits would be omitted).
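
A sketch of what such a checksum could look like (the hashing and serialization choices here are illustrative, not an agreed specification or an existing feature):

# Sketch: SHA-256 over an alphabetically sorted list of (trait_name, value) string
# pairs across all tokens; numeric traits are omitted, per the proposal above.
import hashlib
import json

def traits_checksum(token_trait_dicts):
    pairs = sorted(
        (name, value)
        for traits in token_trait_dicts
        for name, value in traits.items()
        if isinstance(value, str)
    )
    return hashlib.sha256(json.dumps(pairs).encode("utf-8")).hexdigest()

print(traits_checksum([{"hat": "cap", "level": 3}, {"hat": "visor"}]))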

Support for collections that skip token ids

Describe the bug
Some NFT collections, like https://opensea.io/collection/mutant-ape-yacht-club, have a gimmick of skipping token ids. As of the time of posting, there are 19,425 Mutant Apes minted; some have token ids upwards of 10k and even 20k because those were minted and assigned a token id in an unconventional way.

open_rarity.resolver.opensea_api_helpers.get_token_ids assumes that the collection's token ids are incremental from 0 to 19,424.

Information

Collection link/name: https://opensea.io/collection/mutant-ape-yacht-club
Contract standard: ERC721Enumerable
Chain: Ethereum Mainnet

To Reproduce
Steps to reproduce the behavior:
Run python -m scripts.score_real_collections mutant-ape-yacht-club

Expected behavior
It should have all token ids in the generated json file.

Environment

  • OS: Ubuntu 22.04.1
  • Python 3.10.6
  • Library Version v0.4.3-beta

Missing Rarity Data?

Here is the promised update: the OpenSea API now provides rarity data on the Assets API endpoint. We are still figuring out how to support languages other than Python, but this should give you a straightforward integration point for TS.

Should we expect collections to have rarity data? I made a quick test with a BAYC item here:

https://opensea.io/assets/ethereum/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d/5118

However, the rarity data is null. Is this expected?
