mc-cat-tty / placerank

Topics: airbnb, benchmarking, bert-embeddings, datasets, huggingface, huggingface-transformers, information-retrieval, insideairbnb-data, masked-language-models, ncurses

PlaceRank

Search engine for AirBnB listings.

Final assignment of the "Gestione dell'informazione" course at University of Modena and Reggio Emilia. Academic year 2023-2024.

Bringup

Python 3.11 or later is required.

In order to enjoy our not-so-SOTA search engine, the average user needs to run the following commands in a shell where the Python interpreter is available:

# INSTALL DEPENDENCIES
python3 -m pip install -r requirements.txt

# DOWNLOAD DATASET, CREATE INDEX, DOWNLOAD WORDNET AND BERT MODEL
python3 -m setup

Please be aware that bert-large-uncased-whole-word-masking can take up to 1.5 GB of disk space and 30 minutes to download.

By default, the model is stored in the hf_cache folder.

For experienced users, we suggest first creating a virtual environment, where all packages will be installed, and then following the procedure above:

python3 -m venv venv
source venv/bin/activate

Usage

The Placerank project comprises several modules, each with a specific and usually self-explanatory purpose. The most significant ones are:

  • ir_model, models, sentiment and query_expansion modules: contain the models and services that the user can experiment with
  • tui package: contains the view, presenter, event dispatcher and all the logic under the UI's hood
  • benchmark module: contains the implementation of some popular benchmarking metrics
  • preprocessing, dataset, views, config modules: contain the building blocks and convenience functions/classes for the entire project

TUI

The TUI (Terminal User Interface) is the front-end of the project. Launch the following command in a sufficiently large terminal window:

python3 -m placerank

In case of any doubt about the interface, visit the help page.

Note that the application may take a few seconds to load, especially on the first run.

Common Exceptions

urwid.widget.widget.WidgetError: ... canvas when passed size .... This class of errors usually means that the terminal window is too small for the TUI to be rendered.
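A quick way to fail fast with a clearer message is to check the terminal size before starting the TUI. The sketch below is an illustration, not part of placerank; the 80×24 minimum is an assumed placeholder, since the real layout's requirements may differ.

```python
import shutil

def terminal_is_big_enough(min_cols: int = 80, min_rows: int = 24) -> bool:
    """Return True if the current terminal is at least min_cols x min_rows.

    shutil.get_terminal_size falls back to 80x24 when the real size
    cannot be determined (e.g. when output is not a TTY).
    """
    cols, rows = shutil.get_terminal_size()
    return cols >= min_cols and rows >= min_rows
```

Calling this before constructing the urwid widgets lets you print a human-readable hint ("enlarge the terminal window") instead of letting the WidgetError surface.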

Benchmarks

The Benchmark module is designed to test the performance of an index against predefined queries. It includes functionality to load a benchmark dataset, test an index against the queries, and compute various evaluation metrics such as recall, precision, precision at ranking r, average precision, mean average precision, F1 score, and the E-measure.

To use the Benchmark module, follow these steps:

Setup benchmarks:

python3 -m setup_benchmarks

Create a Benchmark object:

bench = Benchmark()

Open the index:

ix = open_dir("index/benchmark")

Test the benchmark against the index. This is required to compute different metrics on the benchmark.

bench.test_against(ix)

Print or use the computed metrics by using the object methods:

print(bench.precision())
print(bench.recall())
print(bench.precision_at_r())
print(bench.precision_at_recall_levels())
print(bench.average_precision())
print(bench.mean_average_precision())
print(bench.f1())
print(bench.e())

Calling the module placerank.benchmark from the command line computes all of the metrics above for the "index/benchmark" index, which is an inverted index built on InsideAirbnb Cambridge listings.
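The steps above can be combined into one small script. This is a sketch under two assumptions: that Benchmark is importable from placerank.benchmark (as the module name above suggests) and that open_dir is Whoosh's standard whoosh.index.open_dir. The imports are deferred into the function so the pure formatting helper can be used on its own.

```python
def run_benchmarks(index_dir: str = "index/benchmark") -> dict:
    """Test the benchmark queries against the index and collect the metrics."""
    from whoosh.index import open_dir          # Whoosh index loader
    from placerank.benchmark import Benchmark  # assumed import path

    bench = Benchmark()
    ix = open_dir(index_dir)
    bench.test_against(ix)  # required before computing any metric
    return {
        "precision": bench.precision(),
        "recall": bench.recall(),
        "precision_at_r": bench.precision_at_r(),
        "average_precision": bench.average_precision(),
        "mean_average_precision": bench.mean_average_precision(),
        "f1": bench.f1(),
        "e_measure": bench.e(),
    }

def format_metrics(metrics: dict) -> str:
    """Render a metric dict as aligned 'name: value' lines."""
    width = max(len(k) for k in metrics)
    return "\n".join(f"{k.ljust(width)}: {v:.4f}" for k, v in metrics.items())
```

Typical usage would be `print(format_metrics(run_benchmarks()))` from the repository root, where the "index/benchmark" directory lives.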

Reviews

The reviews dataset is used to compute the sentiment metric for each listing. Recent reviews carry greater weight in the score than older ones.

To compute sentiment for each review, use the build_reviews_index function from placerank.dataset to build the dataset of reviews. The function initializes a defaultdict whose keys are listing IDs and whose values are lists of tuples containing review information.

The dataset is saved to a reviews.pickle file; to load it, call the load_reviews_index function.
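One way to implement "recent reviews weigh more" is an exponential half-life decay. The sketch below is an illustration only: the half-life scheme and the 365-day constant are assumptions, not necessarily the formula placerank uses.

```python
import math
from datetime import date

def recency_weight(review_date: date, today: date,
                   half_life_days: float = 365.0) -> float:
    """Exponential decay: a review half_life_days old counts half
    as much as one written today (assumed weighting scheme)."""
    age_days = (today - review_date).days
    return 0.5 ** (age_days / half_life_days)

def weighted_sentiment(reviews: list[tuple[date, float]]) -> float:
    """Combine (date, sentiment) pairs into one recency-weighted score."""
    today = date.today()
    weights = [recency_weight(d, today) for d, _ in reviews]
    if not weights:
        return 0.0
    return sum(w * s for w, (_, s) in zip(weights, reviews)) / sum(weights)
```

With this scheme, a one-year-old review contributes exactly half the weight of a review written today, and the listing score stays normalized between the minimum and maximum sentiment values.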

Contributors

  • Corradini Giulio
  • Mecatti Francesco
  • Stano Antonio


placerank's Issues

Pre-ship checks

  • Check installation process (requirements, setup.py)
  • Check TUI
  • Check benchmarks

Presentation

  • EDA - Exploratory Dataset Analysis, e.g. unbalanced review sentiment
  • Objectives and functional requirements. Assumptions and user's degrees of freedom (TUI).
  • System architecture
  • Basic retrieval models
  • Weighting models
  • Query expansion: global with BERT vs WordNet, local with Whoosh (?)
  • Sentiment analysis
  • Show results
  • Future improvements: clustering tree for word2vec (embeddings from a big LLM, not BERT)

Cache listings

Cache the listings dataset when it is first downloaded: building different inverted indexes currently requires re-downloading the dataset each time, which on slow connections can add up to a minute of total processing time.
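A minimal download-and-cache helper could look like the sketch below. This is an illustration of the idea, not placerank's implementation; the function name and cache layout are hypothetical.

```python
import os
import urllib.request

def fetch_cached(url: str, cache_path: str) -> bytes:
    """Download url once; subsequent calls read the cached copy from disk."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return f.read()
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    # persist for the next index build
    os.makedirs(os.path.dirname(cache_path) or ".", exist_ok=True)
    with open(cache_path, "wb") as f:
        f.write(data)
    return data
```

Every subsequent index build then reads the local copy instead of hitting InsideAirbnb again.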

Support runtime edit of Whoosh analyzer

To speed things up, we should provide a convenient method to change the corpus analyzer on the fly.

Right now you have to redefine getDefaultAnalyzer, which is referenced in placerank.logic_views.DocumentLogicView, but this makes it impossible to change the default analyzer at runtime.

We could change the function code of getDefaultAnalyzer by making its func_code field reference the func_code of another ad-hoc function, but I think this is somewhat inelegant and convoluted.

Maybe we should make a factory that returns the appropriate analyzer. Another problem then arises: the call to getDefaultAnalyzer, or to this hypothetical factory, is inside the constructor of a class field. I think we should move the inverted index schema outside and create a separate class.
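Such a factory could be as simple as the sketch below. The analyzer classes are Whoosh's real whoosh.analysis analyzers; the name-to-class mapping and the lazy import are illustrative design choices, not existing placerank code.

```python
def get_analyzer(name: str = "stemming"):
    """Return a Whoosh analyzer by name, so the corpus analyzer
    can be selected at runtime instead of being hard-coded."""
    if name not in ("standard", "stemming", "simple"):
        raise ValueError(f"unknown analyzer: {name!r}")
    # import lazily so the factory module loads even without Whoosh installed
    from whoosh.analysis import SimpleAnalyzer, StandardAnalyzer, StemmingAnalyzer
    return {
        "standard": StandardAnalyzer,
        "stemming": StemmingAnalyzer,
        "simple": SimpleAnalyzer,
    }[name]()
```

The schema-building code would then call `get_analyzer(config_value)` instead of the fixed getDefaultAnalyzer, which also dovetails with moving the schema into its own class.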

Sentiment analysis text preprocessing

An exception is raised by the sentiment analyzer when the input text is longer than the maximum allowed (512 tokens). Should we define a light preprocessing step for reviews that removes markup and other characters that don't influence the sentiment?

In detail, the reported message is:

Token indices sequence length is longer than the specified maximum sequence length for this model (647 > 512). Running this sequence through the model will result in indexing errors
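A possible fix combines the light markup preprocessing suggested above with the tokenizer's built-in truncation (Hugging Face tokenizers accept `truncation=True` and `max_length`). This is a sketch: the function names are hypothetical and the regex-based stripping is a deliberately simple assumption.

```python
import re

def strip_markup(text: str) -> str:
    """Drop HTML tags and collapse whitespace before sentiment analysis."""
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def encode_review(tokenizer, text: str, max_length: int = 512):
    """Tokenize with hard truncation so the model never receives
    more than max_length tokens, avoiding the indexing error."""
    return tokenizer(strip_markup(text), truncation=True, max_length=max_length)
```

Truncation discards the tail of very long reviews, which is usually acceptable for sentiment; if the tail matters, a chunk-and-average strategy would be the alternative.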
