mx-psi / fa-scraper

A FilmAffinity web scraper compatible with Letterboxd

License: GNU General Public License v3.0

Languages: Python 98.88%, Dockerfile 1.12%
Topics: hacktoberfest, filmaffinity, letterboxd

fa-scraper's Introduction

filmAffinity to Letterboxd

(Spanish version)

Generates a CSV file compatible with the Letterboxd diary importer from a FilmAffinity user's data, given their ID.

This program is intended for personal use only; please ensure the person you are getting the data from consents to it beforehand and check which privacy and data protection regulations might apply before using the program to get data from other people.

Installation

Using pip

You can install fa-scraper using pip (Python 3.5+):

python3 -m pip install fa-scraper

Then run

fa-scraper [--csv FILE] [--lang LANG] id

Using Docker

You need to install Docker. Once installed, run:

docker run --name fa-container fascraperdev/fascraper fa-scraper id
docker cp fa-container:/*.csv .
docker rm fa-container

Getting your IDs

In order to get your FilmAffinity data you need to find out what your FilmAffinity ID is. There are different IDs for your user ratings and your lists.

How to get your user id

Go to your profile page and copy the user_id field from the URL:

filmaffinity.com/es/userratings.php?user_id=XXXXXX

How to get a list id

Go to the list pages (in the left menu) and open the list you want to export (it needs to be public).

You need to copy the list_id field from the URL:

filmaffinity.com/es/mylist.php?list_id=XXXXXX

Options

  • --list LIST sets the ID of the public list you want to export
  • --csv FILE sets the CSV export file name to FILE
  • --lang LANG sets the language to LANG. The Letterboxd importer works best with English, which is the default.

Run fa-scraper --help to see further options.

fa-scraper's People

Contributors

arnewgonz, deepsourcebot, dependabot[bot], diegoasterio, jmcruzblazquez, mx-psi, sasha-wiki, sergiom371


fa-scraper's Issues

"TypeError: 'NoneType' object is not callable" when trying to download lists

I'm unable to download lists due to this error:

OS: Windows 10

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\PC\AppData\Local\Programs\Python\Python312\Scripts\fa-scraper.exe\__main__.py", line 7, in <module>
  File "C:\Users\PC\AppData\Local\Programs\Python\Python312\Lib\site-packages\fa_scraper\cli.py", line 78, in main
    save_to_csv(data, fieldnames, export_file)
  File "C:\Users\PC\AppData\Local\Programs\Python\Python312\Lib\site-packages\fa_scraper\fa_scraper.py", line 217, in save_to_csv
    for d in dicts:
  File "C:\Users\PC\AppData\Local\Programs\Python\Python312\Lib\site-packages\fa_scraper\fa_scraper.py", line 174, in get_list_data
    "Year": title.next_sibling.strip()[1:-1],
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable

Thanks for your help.

Remove `locale` dependency

We only use locale for time parsing.
If we do our own parsing, we avoid having to deal with it, which is a major pain point for users.
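
A minimal sketch of what locale-free parsing could look like, assuming dates arrive as strings such as "25 de diciembre de 2023" or "December 25, 2023"; the month table and accepted formats are assumptions, not taken from fa-scraper's actual code.

import datetime as dt
import re

# Hypothetical month-name table; the real strings would come from the
# FilmAffinity pages fa-scraper already parses.
MONTHS = {
    "enero": 1, "febrero": 2, "marzo": 3, "abril": 4, "mayo": 5, "junio": 6,
    "julio": 7, "agosto": 8, "septiembre": 9, "octubre": 10,
    "noviembre": 11, "diciembre": 12,
    "january": 1, "february": 2, "march": 3, "april": 4, "may": 5, "june": 6,
    "july": 7, "august": 8, "september": 9, "october": 10,
    "november": 11, "december": 12,
}

def parse_date(text: str) -> dt.date:
    """Parse a date string without touching the process-wide locale."""
    tokens = re.findall(r"[a-zà-ÿ]+|\d+", text.lower())
    numbers = [int(t) for t in tokens if t.isdigit()]
    months = [MONTHS[t] for t in tokens if t in MONTHS]
    if len(numbers) != 2 or len(months) != 1:
        raise ValueError(f"unrecognized date: {text!r}")
    day, year = sorted(numbers)  # the four-digit year is always the larger number
    return dt.date(year, months[0], day)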

Clearer instructions for beginners

A beginner here. When I try to run pip it gives an error, so I downloaded Docker, but I can't get past the command docker cp fa-container:/*.csv . What am I supposed to do? It took me a while to figure out that I had to replace the id in the first command with my FilmAffinity id, but I can't get that third command to work. Would it be possible to write slightly more detailed instructions? Thanks. My id is 2782367 in case anyone can help me finish the job I couldn't.
Edit: fixed some typos and inconsistencies.

Release `fa-scrapper` bootstrap version

I renamed the package from fa-scrapper to fa-scraper. To ensure people keep receiving the latest version if they install fa-scrapper, I have to do a fa-scrapper release that depends on fa-scraper and does nothing else.
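
A minimal sketch of such a bootstrap release, assuming a plain setuptools setup.py; the version number and metadata below are hypothetical placeholders.

from setuptools import setup

setup(
    name="fa-scrapper",
    version="2.0.0",  # hypothetical placeholder version
    description="Deprecated alias that installs fa-scraper",
    install_requires=["fa-scraper"],
    py_modules=[],  # intentionally ships no code of its own
)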

Bug parsing uncommon character

Hi,

While trying to scrape the data for this film https://www.filmaffinity.com/es/film152926.html, this exception was thrown:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\xxxxx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\Scripts\fa-scrapper.exe\__main__.py", line 7, in <module>
  File "C:\Users\xxxxx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fa_scrapper\cli.py", line 78, in main
    save_to_csv(data, fieldnames, export_file)
  File "C:\Users\xxxxx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fa_scrapper\fa_scrapper.py", line 218, in save_to_csv
    writer.writerow(d)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.496.0_x64__qbz5n2kfra8p0\Lib\csv.py", line 164, in writerow
    return self.writer.writerow(self._dict_to_list(rowdict))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.496.0_x64__qbz5n2kfra8p0\Lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\u014d' in position 40: character maps to <undefined>

which I guess is caused by the character in the director's name Shin'ichirō Watanabe: I deleted my vote on FilmAffinity and re-ran the scraper, and since the film was no longer there to scrape, it worked correctly.

Anyway, thank you very much for your amazing work
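
A likely fix, sketched here as an assumption rather than the project's confirmed solution: on Windows, open() defaults to the ANSI code page (cp1252 in the traceback above), which cannot represent 'ō', so writing the CSV with an explicit UTF-8 encoding avoids the error. The function name matches the one in the traceback, but the body is only illustrative.

import csv

def save_to_csv(dicts, fieldnames, export_file):
    # encoding="utf-8" sidesteps the Windows default (cp1252), which cannot
    # encode characters such as 'ō' (U+014D); newline="" is what csv expects.
    with open(export_file, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for d in dicts:
            writer.writerow(d)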

Create a python package for the core library

At the moment we are mixing the scraping code and the console utility.

If we want to publish the scraping code as a Python package, we first need to split fa-scrapper.py into at least two files:

  1. The code that does the scraping and gets the data (this will be our Python package)
  2. The console utility

This would open up possibilities such as building a web interface that makes exporting the data easier (see the sketch below).
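
A rough sketch of the proposed split; the module layout, function names and CLI flags below are hypothetical, not the project's actual code.

# Proposed layout (hypothetical names):
#   fa_scraper/scraper.py -> library code, e.g. get_user_ratings() below
#   fa_scraper/cli.py     -> console utility, e.g. main() below, which would
#                            do `from fa_scraper.scraper import get_user_ratings`
import argparse
import csv
import sys

def get_user_ratings(user_id: str) -> list[dict]:
    """Library side: fetch a user's ratings from FilmAffinity as dicts."""
    raise NotImplementedError  # the existing scraping code would move here

def main() -> int:
    """CLI side: parse arguments and write the CSV using the library."""
    parser = argparse.ArgumentParser(prog="fa-scraper")
    parser.add_argument("id", help="FilmAffinity user id")
    parser.add_argument("--csv", default="export.csv", help="output file name")
    args = parser.parse_args()
    rows = get_user_ratings(args.id)
    with open(args.csv, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return 0

if __name__ == "__main__":
    sys.exit(main())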

Don't ignore TV films (TV) and shorts (S) by default

At the moment we are skipping TV films (TV) and shorts (S).

I think the best approach is to process everything and add a new option to specify explicitly what to ignore.

It would also be an improvement to remove the (TV) or (S) marker from the title to make the import easier (see the sketch below).
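
A minimal sketch of the suffix cleanup; the function name and the exact set of markers are assumptions.

import re

# Strip a trailing "(TV)" or "(S)" marker from a FilmAffinity title.
_SUFFIX = re.compile(r"\s*\((?:TV|S)\)\s*$")

def clean_title(title: str) -> str:
    return _SUFFIX.sub("", title)

assert clean_title("Cowboy Bebop (TV)") == "Cowboy Bebop"
assert clean_title("Some Film") == "Some Film"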

Error with "locale", cant parse data

I have this error: ValueError: unknown locale: en-US
I already have the "locale" package installed so I dont know what else could be.
Last error line: File "C:\Users\vspc\AppData\Local\Programs\Python\Python39\lib\locale.py", line 501, in _parse_localename raise ValueError('unknown locale: %s' % localename)

Version Python 3.9.2
Windows 10
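
A possible workaround, sketched as an assumption rather than a confirmed fix: Windows does not accept POSIX-style locale names like en-US, so trying a few platform-specific spellings before giving up avoids the hard failure.

import locale

# Candidate spellings for an English time locale; which one is accepted
# depends on the platform (the Windows name differs from the POSIX one).
_CANDIDATES = ["en_US.UTF-8", "en_US", "English_United States", "C"]

def set_time_locale() -> str:
    for name in _CANDIDATES:
        try:
            locale.setlocale(locale.LC_TIME, name)
            return name
        except locale.Error:
            continue
    raise RuntimeError("could not set an English time locale")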

Publish the image to Docker Hub

Docker Hub is the official Docker registry for images.

Every time we push a commit to master, a new image needs to be built and published, so with GitHub Actions we need to build and publish the image with a different tag each time.

Interactive disambiguation

Sometimes the Letterboxd importer has trouble knowing which film the user is trying to import among different films with the same name. Since Letterboxd uses IMDb data internally, we could use IMDb data to disambiguate which film the user has rated.

A possible approach would be to:

  1. Generate an index of films using an IMDb dataset, either at runtime or pregenerated and stored here.
  2. Create a UI to help the user decide which film they are referring to among the ones with the same name (see the sketch below).
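
A rough sketch of step 1, assuming the public title.basics.tsv.gz dataset from IMDb (its column names are used below); everything else, including the function name and the chosen title types, is hypothetical.

import csv
import gzip
from collections import defaultdict

def build_title_index(path="title.basics.tsv.gz"):
    """Map (lowercased title, release year) to the list of matching IMDb ids."""
    index = defaultdict(list)
    with gzip.open(path, "rt", encoding="utf-8", newline="") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            if row["titleType"] not in ("movie", "tvMovie", "short"):
                continue
            key = (row["primaryTitle"].lower(), row["startYear"])
            index[key].append(row["tconst"])
    return index

# Keys with more than one id are the ambiguous titles that would need
# the interactive UI described in step 2.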

Improve errors

Provide sensible error messages:

  • when the ID is incorrect
  • when there is an unexpected network error
  • when there is an unexpected parsing error

Improve exception handling. Since the refactor to generators, some except blocks are useless in their current position.

Ideally, all errors should show a message asking users to report their issue here (see the sketch below).
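
A minimal sketch of that last point, wrapping the CLI entry point so unexpected errors point users at the issue tracker; the wrapper itself is hypothetical.

import sys

ISSUES_URL = "https://github.com/mx-psi/fa-scraper/issues"

def run_with_friendly_errors(main) -> int:
    """Run main() and turn unexpected exceptions into a 'please report' message."""
    try:
        return main()
    except KeyboardInterrupt:
        return 130
    except Exception as exc:  # last-resort handler for unexpected failures
        print(f"Unexpected error: {exc}", file=sys.stderr)
        print(f"Please report this at {ISSUES_URL}", file=sys.stderr)
        return 1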

Create browser extension to run fa-scraper

With minor adjustments it looks like fa-scraper can run in a browser via Pyodide. However, FilmAffinity does not allow cross-origin requests, so we would have to create a browser extension that has permissions over the FilmAffinity domain.

The extension could be inactive unless you are on your profile page (e.g. by checking the URL and that the name on the list matches the name on the top navigation bar) and provide a way to download the CSV.

Error creating the CSV file

It looks like the error is similar to the one already fixed in #66

This time I ran the command: fa-scraper --csv films.csv 834871

Add end to end test suite

  • Add test suite on Linux
  • Add test suite on macOS (needs fix on locale)
  • Add test suite on Windows (needs fix on locale?)
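
A minimal sketch of one such end-to-end test, assuming pytest, network access and an installed fa-scraper; the id below is a placeholder, not a real profile.

import csv
import subprocess

def test_export_creates_csv(tmp_path):
    out = tmp_path / "films.csv"
    # "000000" is a placeholder; a real test would use a known public profile id.
    subprocess.run(["fa-scraper", "--csv", str(out), "000000"], check=True)
    with open(out, encoding="utf-8", newline="") as f:
        rows = list(csv.DictReader(f))
    assert rows, "expected at least one exported film"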

Write about GDPR alternative

When fa-scrapper was originally written, the EU GDPR did not yet exist. Now users can write to FilmAffinity, which is legally obligated to provide a machine-readable dump of their data and to transmit it to Letterboxd (article 20), within a month under normal conditions (article 12).

fa-scrapper could include a document describing this alternative in both Spanish and English so that people know about it.

Error creating the CSV file

Good afternoon!

First of all, thanks for creating fa-scrapper.

I used it and I get the following errors:
fa-scraper --csv gara 9120744
Traceback (most recent call last):
  File "C:\Users\wongk\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\wongk\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\wongk\AppData\Local\Programs\Python\Python310\Scripts\fa-scraper.exe\__main__.py", line 7, in <module>
  File "C:\Users\wongk\AppData\Local\Programs\Python\Python310\lib\site-packages\fa_scraper\cli.py", line 78, in main
    save_to_csv(data, fieldnames, export_file)
  File "C:\Users\wongk\AppData\Local\Programs\Python\Python310\lib\site-packages\fa_scraper\fa_scraper.py", line 218, in save_to_csv
    writer.writerow(d)
  File "C:\Users\wongk\AppData\Local\Programs\Python\Python310\lib\csv.py", line 154, in writerow
    return self.writer.writerow(self._dict_to_list(rowdict))
  File "C:\Users\wongk\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u014d' in position 44: character maps to <undefined>

Could you help me?

Many thanks and best regards!!

Enforce style

Add a Python style checker via GitHub Actions and a guideline on proper style.
