
bqfetch

A lightweight tool to fetch tables from BigQuery as pandas DataFrames very fast, using the BigQuery Storage API combined with multiprocessing. This module also aims to fetch large tables that cannot fit into memory, by chunking the table in a smart and scalable way.

Installation

pip install bqfetch
pip install -r requirements.txt

Algorithm

  • Fetch all distinct values from the given index column.
  • Divide these indices into chunks based on the available memory and the number of cores on the machine (a simplified sketch of this step is given below).
  • If multiprocessing is enabled:
    • Each chunk is divided into multiple sub-chunks based on the nb_cores parameter and the available memory.
    • For each sub-chunk, create a temporary table containing all the matching rows of the whole table.
    • Fetch these temporary tables as dataframes using BigQuery Storage.
    • Merge the dataframes.
    • Delete the temporary tables.
  • If multiprocessing is disabled:
    • Same process, with only one temporary table and no parallel processes created.
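
To make the chunking step more concrete, here is a minimal, simplified sketch of how the distinct index values could be split into chunks. It is illustrative only: the function and variable names are made up for this example, and it does not claim to reproduce bqfetch's actual implementation.

import math

def build_chunks(distinct_values, table_size_in_GB, chunk_size_in_GB):
    # Illustrative only: assume every index value accounts for roughly
    # the same share of the table, as recommended in the Warning section.
    nb_chunks = max(1, math.ceil(table_size_in_GB / chunk_size_in_GB))
    values_per_chunk = math.ceil(len(distinct_values) / nb_chunks)
    return [
        distinct_values[i:i + values_per_chunk]
        for i in range(0, len(distinct_values), values_per_chunk)
    ]

# With the numbers from the verbose example below (96 ids, a 2.19GB table,
# 3GB chunks), this yields a single chunk containing all 96 values.
chunks = build_chunks(list(range(96)), table_size_in_GB=2.19, chunk_size_in_GB=3)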

Use case

Fetching a huge table of users using multiple cores

id     Name        Age
187    Bartolomé   30
188    Tristan     22
...    ...         ...
>>> table = BigQueryTable("PROJECT", "DATASET", "TABLE")
>>> fetcher = BigQueryFetcher('/path/to/service_account.json', table)
>>> chunks = fetcher.chunks('id', by_chunk_size_in_GB=5)

>>> for chunk in chunks:
        df = fetcher.fetch(chunk, nb_cores=-1, parallel_backend='billiard')
        # ...
  • First, we have to create a BigQueryTable object, which contains the path to the BigQuery table stored in GCP.
  • A fetcher is then created, taking as parameters the absolute path to the service_account.json file and the table; this file is mandatory in order to perform operations in GCP.
  • Chunk the whole table, given the column name and the chunk size. In this case, choosing the id column is ideal because each value of this column appears the same number of times: once. Concerning the chunk size, if by_chunk_size_in_GB=5, each chunk fetched on the machine will be about 5GB in size, so it has to fit into memory. Keep roughly 1/3 more memory available, because a DataFrame object is larger than the raw fetched data.
  • For each chunk, fetch it (a complete loop that accumulates the chunks into a single DataFrame is sketched below).
    • nb_cores=-1 will use the number of cores available on the machine.
    • parallel_backend='billiard' | 'joblib' | 'multiprocessing' specifies the backend framework to use.
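
Putting the pieces together, here is a minimal sketch of a loop that accumulates the fetched chunks into a single DataFrame. The pd.concat call is plain pandas, not part of bqfetch, and assumes the combined result fits into memory:

>>> import pandas as pd
>>> table = BigQueryTable("PROJECT", "DATASET", "TABLE")
>>> fetcher = BigQueryFetcher('/path/to/service_account.json', table)
>>> chunks = fetcher.chunks('id', by_chunk_size_in_GB=5)

>>> dfs = []
>>> for chunk in chunks:
        # Fetch each chunk with all available cores and collect the result.
        df = fetcher.fetch(chunk, nb_cores=-1, parallel_backend='billiard')
        dfs.append(df)
>>> users = pd.concat(dfs, ignore_index=True)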

Fetch by number of chunks

It is also possible to use by_nb_chunks instead of by_chunk_size_in_GB. It will divide the table into N chunks, so you cannot control the size of each chunk as flexibly.

>>> table = BigQueryTable("PROJECT", "DATASET", "TABLE")
>>> fetcher = BigQueryFetcher('/path/to/service_account.json', table)
>>> chunks = fetcher.chunks('id', by_nb_chunks=10)

>>> for chunk in chunks:
        df = fetcher.fetch(chunk, nb_cores=-1, parallel_backend='billiard')
        # ...

Verbose mode

>>> chunks = fetcher.chunks(column='id', by_nb_chunks=1, verbose=True)
# Available memory on device:  7.04GB
# Size of table:               2.19GB
# Prefered size of chunk:      3GB
# Size per chunk:              3GB
# Nb chunks:                   1
  
# Nb values in "id":           96
# Chunk size:                  3GB
# Nb chunks:                   1
  
>>> for chunk in chunks:
        df = fetcher.fetch(chunk=chunk, nb_cores=1, parallel_backend='joblib', verbose=True)
# Use multiprocessing :        False
# Nb cores:                    1
# Parallel backend:            joblib

# Time to fetch:               43.21s
# Nb lines in dataframe:       3375875
# Size of dataframe:           2.83GB
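
When choosing by_chunk_size_in_GB yourself, you can check the memory available on the machine beforehand, for example with psutil. This is a generic sketch, independent of bqfetch:

import psutil

# Available memory in GB on this machine; keep the chunk size comfortably
# below this value (and leave ~1/3 headroom for the DataFrame overhead).
available_gb = psutil.virtual_memory().available / 1024 ** 3
print(f'{available_gb:.2f}GB available')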

Warning

We recommend using this tool only when the table to fetch contains a column that can be easily chunked (divided into small parts). The ideal column for this contains distinct values, or values that appear approximately the same number of times. If some values appear thousands of times and others only a few times, the chunking will not be reliable, because we have to assume that each chunk is approximately the same size in order to estimate the memory needed to fetch the table in an optimized way. A quick way to check a candidate column is sketched after the examples below.

A good index column:

This column contains distinct values, so it can easily be divided into chunks.

Card number
4390 3849 ...
2903 1182 ...
0562 7205 ...
...

A bad index column:

This column contains a lot of variance in how often each value appears, so the chunking will not be reliable.

Age
18
18
64
18
...
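
As a quick sanity check before chunking, you can look at how evenly the candidate column's values are distributed with a plain BigQuery query. The snippet below uses the standard google-cloud-bigquery client directly (not bqfetch); the column and table names are placeholders:

from google.cloud import bigquery

client = bigquery.Client.from_service_account_json('/path/to/service_account.json')

# Count how many rows each value of the candidate index column covers.
query = """
    SELECT id AS value, COUNT(*) AS nb_rows
    FROM `PROJECT.DATASET.TABLE`
    GROUP BY id
    ORDER BY nb_rows DESC
"""
freq = client.query(query).to_dataframe()

# A good index column shows little variance in nb_rows across values.
print(freq['nb_rows'].describe())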

More cores != faster

Keep in mind that adding more cores to the fetching process will not necessarily improve performance; most of the time it will even be slower. The reason is that fetching speed depends directly on the Internet bandwidth available on your network, not on the number of working cores or on computing power. However, you can run your own tests: in some cases multiprocessing does save time (e.g. when cloud machines allot a fixed amount of bandwidth per core, multiplying the number of cores also multiplies the bandwidth, as with GCP Compute Engine instances).

Contribution

bqfetch is open to new contributors, especially for bug fixes or the implementation of new features. Do not hesitate to open an issue/pull request :)

License

MIT

Copyright (c) 2021-present, Tristan Bilot


bqfetch's Issues

bqfetcher json file path

Hi Team,
I'm trying to use the bqfetch library to fetch a BigQuery table.
I would like to know what file you are referring to in the line below. Can you elaborate on it and on where I can find this JSON file for my project?
Since I don't see enough info in the GitHub repo, I'm raising this request.

fetcher = BigQueryFetcher('/path/to/service_account.json', table)

Thanks in advance.
Regards
Nirmal NK

service account and dataset in different projects

Hello,

thank you very much for the library, I agree it seems to be very promising!

Is it possible to add the project id as a parameter of the BigQueryClient?

Sometimes the project where the service account was created and the project of the dataset to be accessed may be different. For example, in my case it fails due to the lack of the correct roles/permissions in the service account's project, so the following piece of code in bqfetch.py fails:

bq_client = bigquery.Client(
    credentials=credentials,
    project=credentials.project_id
)

while directly using bigquery.Client() with an explicit project:

bigquery.Client(credentials=credentials, project=bq_project_id)

works correctly.

Thank you,
Steven

InvalidChunkRangeException

Hello,
Thank you for a such useful library!

I am trying to fetch the table, as recommended, on a column that contains distinct values, but I am still getting an error:

InvalidChunkRangeException: Difference of range between elements of column datetime_tzutc is too high: more than 25.0% of elements are too far from the mean.

Could you please give me advice on how I should handle this case?
Thank you in advance!

Best,
Veniamin

Support for fetching only specific partition(s)

Hi @TristanBilot, first of all thanks for creating this library - seems to be very promising!
I was exploring using it in one of my projects and realized that it would be great if it supported fetching data only from specific partition(s) as opposed to the whole table.

Do you have any plans on adding that feature?

Error when importing bqfetch

Hi everyone !

I'm trying to read a big table from BigQuery using Python in Google Colab and I found bqfetch; however, when I try to import BigQueryFetcher and BigQueryTable I get an error.
I installed it by doing:

!pip install bqfetch
!pip install -r requirements.txt

But when running the second command, I get this error:

ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

Then, my code is this:

from bqfetch import BigQueryFetcher, BigQueryTable

table = BigQueryTable("PROJECT", "DATASET", "TABLE")
fetcher = BigQueryFetcher('/path/to/bq_service_account.json', table)
chunks = fetcher.chunks('id', by_chunk_size_in_GB=2)

for chunk in chunks:
    df = fetcher.fetch(chunk, nb_cores=1, verbose=True)

Am I doing something wrong? Because this is what I get:

[Screenshot of the error, 2023-05-08 at 16:23:49]

Some help would be appreciated, because I cannot run anything and so I can't get the table I need as a DataFrame in Python :(

Thank you in advance!
Marina
