
(CI badge)

██████╗ ███████╗████████╗███████╗██████╗  ██████╗ ███████╗
██╔══██╗██╔════╝╚══██╔══╝██╔════╝██╔══██╗██╔════╝ ██╔════╝
██████╔╝█████╗     ██║   █████╗  ██║  ██║██║  ███╗█████╗  
██╔══██╗██╔══╝     ██║   ██╔══╝  ██║  ██║██║   ██║██╔══╝  
██████╔╝███████╗   ██║   ███████╗██████╔╝╚██████╔╝███████╗
╚═════╝ ╚══════╝   ╚═╝   ╚══════╝╚═════╝  ╚═════╝ ╚══════╝

BetEdge

An extensible dockerized full stack sports betting arbitrage application in Python.

Components

/scrappers

A scraping framework for collecting odds data from various bookmakers using Selenium and storing it in a MySQL database. It is capable of scraping hundreds of events per minute, with support for the top 4 bookmakers in Greece. Event URLs are first extracted from the bookmakers' websites, and the odds are then scraped from the extracted URLs. The scrapers are designed to be extensible, so additional bookmakers and markets can be supported with minimal effort.
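
To illustrate the two-step flow described above (first extract event URLs, then scrape the odds from them), here is a minimal headless-Selenium sketch. The function names and CSS selectors are placeholders, not the project's actual scraper API:

from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By

def make_driver():
    """Create a headless Firefox driver to keep resource consumption low."""
    options = Options()
    options.add_argument("--headless")
    return webdriver.Firefox(options=options)

def collect_event_urls(driver, listing_url):
    """Step 1: extract event page URLs from a bookmaker's listing page."""
    driver.get(listing_url)
    links = driver.find_elements(By.CSS_SELECTOR, "a.event-link")  # selector is bookmaker-specific
    return [a.get_attribute("href") for a in links]

def scrape_odds(driver, event_url):
    """Step 2: visit an extracted URL and read the displayed odds."""
    driver.get(event_url)
    cells = driver.find_elements(By.CSS_SELECTOR, ".odds-value")  # selector is bookmaker-specific
    return [float(c.text.replace(",", ".")) for c in cells]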

Features

  • Concurrent scraping of multiple events using multiprocessing.
  • Database connection pooling for concurrent database access (see the sketch after this list).
  • Headless scraping for reduced resource consumption.
  • Scheduler for scraping events at regular intervals.
  • Logging for debugging and error handling.
  • Support for Firefox and Chrome webdrivers.
  • Email notifications for encountered errors.
  • Extensible to support more bookmakers and markets.
  • Complete documentation and guides for adding new scrapers.
  • Centralized configuration and runtime manager for all scrapers.
  • Data validation and error handling for all scrapers.
  • Uniform data format and standardization in the database.
  • Event matching for the same event across different bookmakers.
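
Below is a minimal sketch of the multiprocessing-plus-connection-pooling combination listed above, assuming mysql-connector-python; the credentials, table, and column names are placeholders rather than the project's actual layout. Each worker process builds its own small pool in the pool initializer, since connections cannot be shared across processes:

import multiprocessing as mp
from mysql.connector import pooling

_pool = None  # one small connection pool per worker process

def _init_worker():
    global _pool
    _pool = pooling.MySQLConnectionPool(
        pool_name="scraper", pool_size=3,
        host="localhost", user="betedge", password="secret", database="betedge",
    )

def scrape_and_store(event_url):
    odds_value = 1.95  # in the real scrapers this comes from Selenium, as sketched earlier
    conn = _pool.get_connection()  # borrow a pooled connection
    try:
        cur = conn.cursor()
        cur.execute("INSERT INTO Odds (event_url, value) VALUES (%s, %s)", (event_url, odds_value))
        conn.commit()
    finally:
        conn.close()  # returns the connection to the pool

if __name__ == "__main__":
    urls = ["https://example-bookmaker.gr/event/1", "https://example-bookmaker.gr/event/2"]
    with mp.Pool(processes=4, initializer=_init_worker) as workers:
        workers.map(scrape_and_store, urls)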

Scraping process

(diagram: the scraping process)

Scraping quickstart

For a quickstart guide on how to run the scrapers, see the corresponding README.

/backend

A Django REST API project that serves the odds and event data endpoints. It is responsible for finding arbitrage opportunities and calculating the optimal stakes for each bet, and is integrated with Celery for asynchronous task execution, using Redis as the message broker.
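
The arbitrage arithmetic itself is simple: if the implied probabilities (1 / odds) of all outcomes of an event, taking the best available price for each outcome, sum to less than 1, an arbitrage exists, and splitting the stake proportionally to 1 / odds yields the same payout whichever outcome wins. A minimal sketch with illustrative function names, not the backend's actual API:

def arbitrage_margin(odds):
    """Sum of implied probabilities; a value below 1.0 means an arbitrage exists."""
    return sum(1.0 / o for o in odds)

def optimal_stakes(odds, bankroll):
    """Split the bankroll so that every outcome returns the same amount."""
    margin = arbitrage_margin(odds)
    stakes = [bankroll * (1.0 / o) / margin for o in odds]
    guaranteed_return = bankroll / margin  # identical payout whichever outcome wins
    return stakes, guaranteed_return

# Example: best home/draw/away prices taken across different bookmakers.
odds = [2.10, 3.60, 4.20]
print(arbitrage_margin(odds))              # ~0.992 < 1, i.e. an ~0.8% arbitrage
print(optimal_stakes(odds, bankroll=100))  # stakes and the guaranteed return (~100.8)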

Backend quickstart

For a quickstart guide on how to run the backend, see the corresponding README.

/frontend

A sample Vue.js frontend for displaying the odds and arbitrage opportunities. It is currently under development and is included mainly for build and demonstration purposes.

Frontend quickstart

For a quickstart guide on how to run the frontend, see the corresponding README.

Database

A MySQL database for storing the odds, URL, and event data. All SQL scripts for creating the schema and adding the initial data are located in the /db/sql folder.

Schema

The schema consists of 8 tables:

  • Events: Stores the event data.
  • Urls: Stores the URLs for the events.
  • Odds: Stores the odds for the events.
  • Bookmakers: Stores the bookmaker IDs and names.
  • Markets: Stores the desired markets to be scraped.
  • Sports: Stores the sports available for scraping.
  • Arbitrage: Stores the arbitrage opportunities, the events they appear in, and the arbitrage percentage.
  • ArbitrageOutcomes: Stores the individual arbitrage legs that make up an arbitrage opportunity.

A complete schema diagram can be found here.
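
As an orientation aid only, the query below shows how the Odds, Events, and Bookmakers tables relate when collecting every bookmaker's prices for a single event, which is the raw input of the arbitrage search; the column names here are assumptions, not the actual schema:

import mysql.connector

# Placeholder credentials for illustration only.
conn = mysql.connector.connect(host="localhost", user="betedge",
                               password="secret", database="betedge")
cur = conn.cursor()
cur.execute("""
    SELECT e.event_id, b.name AS bookmaker, o.outcome, o.value AS odd
    FROM Odds o
    JOIN Events e ON o.event_id = e.event_id
    JOIN Bookmakers b ON o.bookmaker_id = b.bookmaker_id
    WHERE e.event_id = %s
""", (1,))
for event_id, bookmaker, outcome, odd in cur.fetchall():
    print(event_id, bookmaker, outcome, odd)
conn.close()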

Quickstart

Docker

The easiest way to run the project is by using docker-compose. To run the project in Docker containers, follow these steps:

  1. Clone the repository:

git clone https://github.com/nickkatsios/BetEdge.git && cd BetEdge

  2. Download and install Docker and Docker Compose.

  3. Create a .env file in the root directory of the project with the variables defined in the .env.example file.

  4. Build and start the containers by running:

docker compose up --build

This will spin up containers for all components. Since all services rely on the database, the db container will be initialized first. Healthchecks are used to ensure that the db is ready before the other containers are initialized. The database schema is automatically created and the initial data is added by the db container.
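
When the services are started individually instead (see the next section), the readiness guarantee that the healthcheck provides can be approximated with a small poll before launching the scrapers; a rough sketch with placeholder connection details:

import time
import mysql.connector
from mysql.connector import Error

def wait_for_db(retries=30, delay=2):
    """Return True once MySQL accepts connections, mimicking the compose healthcheck."""
    for _ in range(retries):
        try:
            conn = mysql.connector.connect(host="localhost", user="betedge",
                                           password="secret", database="betedge",
                                           connection_timeout=3)
            conn.close()
            return True
        except Error:
            time.sleep(delay)
    return False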

Setting up each service individually

If you want to run each service individually, follow the instructions provided in the corresponding README. The order in which the services should be initialized is the following:

  1. Database --> See the Scrappers README
  2. Scrappers --> See the Scrappers README
  3. Backend --> See the Backend README
  4. Frontend --> See the Frontend README

Notes

  • The project is currently under development and is not ready for production use.
  • Bookmakers may update their websites at any time, which may break the scrapers. If you encounter any errors, you will need to update the affected scrapers accordingly.

Contributing

Pull requests are always welcome. For major changes, please open an issue first to discuss what you would like to change.
