avnsx / fansly-downloader

Easy-to-use fansly.com content downloading tool. Written in Python, but also ships as a standalone executable app for Windows. Enjoy your Fansly content offline anytime, anywhere, in the highest available resolution! Fully customizable to download in bulk or individually: photos, videos & audio from timeline, messages, collections & specific posts 👍

Home Page: https://fansly.com/

License: GNU General Public License v3.0

Python 100.00%
fansly datascraping python fansly-scraper fansly-downloader fansly-download windows macos linux image-download

fansly-downloader's People

Contributors

avnsx, pawnstar81, upanddown666


fansly-downloader's Issues

Name files based on post text content?

Hi! I was wondering if it would be possible to scrape the text posted alongside media on a post and name the files accordingly? Something like [POST_TEXT]-1 ([FILENAME]).[EXT], [POST_TEXT]-2 ([FILENAME]).[EXT] for 2 pieces of media attached to the same post? Either way, really appreciate the project! Thank you!
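For illustration, a rough sketch of how such a naming scheme could be built from a post's text (the function and the sanitization rules here are hypothetical, not part of the project):

import re

def build_names(post_text, filenames):
    # Build "[POST_TEXT]-N ([FILENAME]).[EXT]" style names for one post's media.
    # Strip characters that are illegal in Windows/Unix filenames and cap the length.
    safe_text = re.sub(r'[\\/:*?"<>|\r\n]+', ' ', post_text).strip()[:80] or 'untitled'
    names = []
    for index, original in enumerate(filenames, start=1):
        stem, _, ext = original.rpartition('.')
        names.append(f'{safe_text}-{index} ({stem}).{ext}')
    return names

# e.g. build_names('Beach day!', ['509041386499092480.jpeg', '509041386499092481.jpeg'])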

I can't download the content in private messages - Fansly

Hello Avnsx :) I want to thank you for your Fansly project, it is working well :)
But it seems to me that the program is not working for the content that is in the messages. I hope you find a solution for this :)
Keep up the good work!

Images registered but not downloaded

I have this running off the Windows .exe file. In the logs below, Admin stands in for my username and Creator for the creator I'm trying to scrape; I've had this happen with multiple creators, which is why I generalized it. I moved the Fansly scraper to my desktop, created the creator folder there, and used the auto configurator to make sure everything is captured correctly. Each creator folder includes a Photos and Videos folder, and my antivirus is turned off for this.

The log output starts like this:

 Info | 19:05 || Reading config.ini file ...
 Info | 19:05 || Targeted creator: "Creator"
 Info | 19:05 || Using user-agent: "Mozilla/5.0 (Windows NT 10.0 [...] 0.2) Gecko/20100101 Firefox/112.0.2"
 Info | 19:05 || Open download folder when finished, is set to: "False"
 Info | 19:05 || Downloading files marked as preview, is set to: "True"
 WARNING | 19:05 || Previews downloading is enabled; repetitive and/or emoji spammed media might be downloaded!
 WARNING | 19:05 || Update recent download is enabled

 WARNING | 19:05 || 'Creator_fansly' folder is not located in the local directory; but you launched in update recent download mode,
                    so find & select the folder that contains recently downloaded 'Photos' & 'Videos' as subfolders (it should be called 'Creator_fansly')
 Info | 19:05 || Chose folder path C:/Users/Admin/Desktop/Fansly_Scraper/Creator_fansly
 Info | 19:05 || Finished hashing! Will now compare each new download against 0 photo & 0 video hashes.
 Info | 19:05 || No scrapeable media found in mesages
 Info | 19:05 || Started profile media download; this could take a while dependant on the content size ...
 Info | 19:05 || Inspecting most recent page
 Info | 19:05 || Downloading Image '2023-04-30 12-54-44 509041386499092480 preview.jpeg'
 Info | 19:05 || Downloading Image '2023-04-30 12-54-44 509041386499092480.jpeg'

The end of the scraper's output looks like this:

| Info | 19:15 || Downloading Image '2022-11-19 13-10-05 450338417373360128.png'
 Info | 19:15 || Downloading Image '2022-11-19 13-10-05 450338417373360130.png'
 Info | 19:15 || Downloading Image '2022-11-19 13-10-05 450338417373360129.png'
 Info | 19:15 || Downloading Image '2022-11-18 12-28-07 449965467671470080.png'
 Info | 19:15 || Inspecting page: 449965469357580288

╔═
  Done! Downloaded 0 pictures & 0 videos (0 duplicates declined)
  Saved in directory: "C:\Users\Admin\Desktop\Fansly_Scraper\C:/Users/Admin/Desktop/Fansly_Scraper/Creator_fansly"
  ✶ Please leave a Star on the GitHub Repository, if you are satisfied! ✶            ══

This is all I've been able to get from the exe file. If there's something I can do to give more information please let me know. Thank you for any assistance that can be provided in this instance.

'Update recent download' hash process uses massive amounts of RAM

These lines

https://github.com/Avnsx/fansly/blob/642259d03da67b52ab188a7532cb2ab0d2afa2c8/fansly_scraper.py#L180-L182

read the entire video into memory, which might be multiple GB. Files should instead be read in chunks and the hasher updated with each chunk.
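For reference, a chunked version could look roughly like the sketch below (this is not the repository's code; the MD5 choice and the 1 MiB chunk size are placeholders for illustration):

import hashlib

def hash_file(path, chunk_size=1024 * 1024):
    # Hash a file without loading it entirely into memory.
    hasher = hashlib.md5()
    with open(path, 'rb') as fh:
        # Read fixed-size chunks until EOF; memory use stays near chunk_size.
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            hasher.update(chunk)
    return hasher.hexdigest()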

This area

https://github.com/Avnsx/fansly/blob/642259d03da67b52ab188a7532cb2ab0d2afa2c8/fansly_scraper.py#L208-L224

does this concurrently. ThreadPoolExecutor's default worker count is min(32, os.cpu_count() + 4) on Python 3.8+, so several large files can be held in memory at once.

I attempted to update a recent download, and the Python process used >25 GB of RAM, causing my OS to fail and require a reboot because system processes ran out of memory.

As a hotfix, using with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor: appears to work (probably only as long as no single file to be hashed exceeds total free RAM). Memory usage peaked at ~4 GB.

Multi-threaded I/O is generally only faster on SSDs; it's slower on hard drives.
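With chunked hashing in place, the thread pool can also be capped explicitly, so peak memory stays around max_workers chunks instead of max_workers whole files. A minimal sketch, reusing the hypothetical hash_file helper above:

import concurrent.futures

def hash_files(paths, max_workers=4):
    # Small fixed pool; each worker holds at most one chunk in memory at a time.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        return dict(zip(paths, executor.map(hash_file, paths)))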

Every time the .exe is run, a new 160 MB folder is created in /AppData/Local/Temp

I'm not sure why exactly this is happening, but as the title states, every time I start a new run of "Fansly Scraper.exe" it creates a brand-new folder inside AppData/Local/Temp. Every one is filled with the same Python files/libraries. It is incredibly annoying... I was wondering what was eating up my storage so much.
Otherwise, the program works.
I'm using version 0.3.3.0.

EOF Error

The application crashed and provided me with an error and instructions as I'm sure you know because you programmed it. Here is the error. I'm not sure what extra information I can provide. I was using previously downloaded files from .2 and tried the new update old download folder option. Let me know if you need more information.

Traceback (most recent call last):
  File "urllib3\connectionpool.py", line 703, in urlopen
  File "urllib3\connectionpool.py", line 398, in _make_request
  File "urllib3\connection.py", line 239, in request
  File "http\client.py", line 1282, in request
  File "http\client.py", line 1328, in _send_request
  File "http\client.py", line 1277, in endheaders
  File "http\client.py", line 1037, in _send_output
  File "http\client.py", line 998, in send
  File "ssl.py", line 1236, in sendall
  File "ssl.py", line 1205, in send
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2384)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "requests\adapters.py", line 440, in send
  File "urllib3\connectionpool.py", line 785, in urlopen
  File "urllib3\util\retry.py", line 592, in increment
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='cdn2.fansly.com', port=443): Max retries exceeded with url (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2384)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "fansly_scraper.py", line 304, in <module>
  File "requests\sessions.py", line 542, in get
  File "requests\sessions.py", line 529, in request
  File "requests\sessions.py", line 645, in send
  File "requests\adapters.py", line 517, in send
requests.exceptions.SSLError: HTTPSConnectionPool(host='cdn2.fansly.com', port=443): Max retries exceeded with url  (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2384)')))

download logger

Can you implement a way for the downloader to log which files it has downloaded previously, so the next time you run it, it won't re-download the same files again?
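One way such a log could work (a sketch only; the file name and id handling are assumptions, not the project's design) is a plain text file of media ids that is loaded at startup and appended to after each successful download:

from pathlib import Path

LOG_FILE = Path('downloaded_ids.txt')  # hypothetical location next to the download folder

def load_downloaded_ids():
    return set(LOG_FILE.read_text().split()) if LOG_FILE.exists() else set()

def mark_downloaded(media_id):
    # Append the id so future runs can skip this file without re-hashing it.
    with LOG_FILE.open('a') as fh:
        fh.write(media_id + '\n')

already_seen = load_downloaded_ids()
# Inside the download loop: skip ids in already_seen, call mark_downloaded(id) after saving.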

NameError: name 'group_id' is not defined

Getting the below error when running the standalone executable. I'm running the application on a Windows 10 virtual machine, in the root of the drive I am saving the data to. I tried running as admin, compatibility mode, etc. Did I miss a prerequisite download or something?

Info | 15:30 || Creating download directories ...
Traceback (most recent call last):
File "fansly_scraper.py", line 234, in
NameError: name 'group_id' is not defined
[6680] Failed to execute script 'fansly_scraper' due to unhandled exception!

No downloading takes place.

Good morning.
I'm contacting you because I'm having a problem downloading media.
I paid for a subscription to the account, and when I initiate a scrape, no file is downloaded.

I get the following output:

  Information | 08:25 || Reading config.ini file ...
sh:1:title:not found
  Information | 08:25 || Targeted creator: 
  Information | 08:25 || Using user-agent: "Mozilla/5.0 (Windows NT 10.0 [...] 102.0) Gecko/20100101 Firefox/102.0"
  Information | 08:25 || Open download folder when finished, is set to: "False"
  Information | 08:25 || Downloading files marked as preview, is set to: "True"
  WARNING | 08:25 || Previews downloading is enabled; repetitive and/or emoji spammed media might be downloaded!
  Information | 08:25 || Acknowledging custom basis download directory: "/mnt/ABC"

  Information | 08:25 || Automatically detecting whether download folder exists
  Information | 08:25 || Creating download directories ...
  Information | 08:25 || Started messages media download ...
  Information | 08:25 || Inspecting message: 49535991
  Information | 08:25 || Inspecting message: 49535991
  Information | 08:25 || Started profile media download; this could take a while depending on the content size ...
  Information | 08:25 || Inspecting most recent page
Killed

Where does the "Killed" error come from?

No longer downloading new content

Version: Latest (Ran updater to check)
Model: indigowhite

Everything seems to work correctly, but it won't download a post set made in the last 24 hours. It downloaded a set on the 21st just fine, but it's as if it doesn't see the new set. Happy to do any testing needed.

Content getting falsely flagged as previews

Version 0.4 is not downloading videos from 2 most recent posts for a specific model. It does appear to download all other media from this model. I have had no issue downloading from a few other models.

Videos from this model require an active subscription for access - I can provide an account with access for troubleshooting.

For testing I tried 0.3.5, which does download the videos, but only at 720p because of that previous issue that was resolved in 0.4.

Issue on macOS

On macOS, I am getting an issue.

  File "fansly_scraper.py", line 48
    output(2,'\n [2]ERROR','<red>', f'"{e}" is missing or malformed in the configuration file!\n{21*" "}Read the Wiki > Explanation of provided programs & their functionality > config.ini')
                                                                                                                                                                                           ^
SyntaxError: invalid syntax

I have not modified any of your code except editing the required fields in config.ini

Cannot download media from messages

I always get "No scrapeable media found in mesages", but the timeline posts are downloading. Is message scraping still to be implemented, or is it a bug related to my configuration? I used the automatic config, and I'm using the exe version (0.3.5).

Bug with message scraper in 0.4

I'm using 0.4 with Python.
I get the following error from the messages scraper. I haven't unlocked the most recent video, which led me to assume that was the issue.
Other creators' messages work, and the timeline works for that creator.

Traceback (most recent call last):
  File "G:\fansly-main\Fansly_Downloader.py", line 1085, in <module>
    contained_posts += [parse_media_info(obj)]
                        ^^^^^^^^^^^^^^^^^^^^^
  File "G:\fansly-main\Fansly_Downloader.py", line 718, in parse_media_info
    if all([default_normal_height, highest_variants_resolution_height, default_normal_height > highest_variants_resolution_height]):
                                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>' not supported between instances of 'NoneType' and 'int'
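For context, all([a, b, a > b]) builds the whole list first, so the a > b comparison runs even when one value is None; chaining the checks with the and operator short-circuits instead. A sketch of the general pattern (hypothetical function name, not the project's actual fix):

def is_downscaled(default_height, highest_variant_height):
    # `and` short-circuits, so the comparison never sees a None value.
    return (default_height is not None
            and highest_variant_height is not None
            and default_height > highest_variant_height)

is_downscaled(1080, None)  # False, no TypeError
is_downscaled(1080, 720)   # True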

Getting Error When Running

When I run the scraper I get the error " "'TargetedCreator'" is missing or malformed in the configuration file! Read the ReadMe file for assistance."

I copied the authorization and User-Agent from the dev tools as explained in the ReadMe, but I still get the error. Should they be in quotes, or should there be a space between the = and the value I'm putting into the config file?

Compatibility with os.startfile

I've recently noticed that https://github.com/Avnsx/fansly/blob/3f7aabaf8000cbf5636600159f64f6e4cb0d55c3/fansly_scraper.py#L189
is incompatible with any OS other than Windows, which means that on other operating systems the code errors out once it has reached the end of the file, instead of waiting 120 seconds before closing. This is a minor issue, since it is located at the end of the entire work process, but still something to pay attention to for v0.3.

I would be happy if anyone could come up with a better solution than os.startfile. I personally think there's no way around doing an OS check and then using open (macOS), xdg-open (Linux), or os.startfile() for Windows; see the sketch below.
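A sketch of that OS check, assuming the goal is simply to open the download folder in the platform's file manager:

import os
import platform
import subprocess

def open_folder(path):
    # Open a folder in the platform's file manager.
    system = platform.system()
    if system == 'Windows':
        os.startfile(path)  # os.startfile only exists on Windows
    elif system == 'Darwin':
        subprocess.run(['open', path], check=False)
    else:
        subprocess.run(['xdg-open', path], check=False)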

Scraper Not Downloading Media From Messages

As title suggests, I am having trouble downloading media from messages. The scraper provides the line "No scrapable media found in messages" even though there is plenty of media to download from messages. Please help.

Not downloading anything?

I am subbed to a few creators on Fansly and have some with follower content.
I tried setting up the config myself and with the automatic configurator, both with the same result:

Done! Downloaded 0 pictures & 0 videos (0 duplicates declined)

I'm not quite sure what it could be

Readme misspellings and grammatical errors

Hello, I've found some grammatical errors and misspellings in the following lines:

  • Line 25
    It says ... the scraper itsself. Only ...
    Should be ... the scraper itself. Only ...
  • Line 28:
    It says ... Windows & you to have to have recently logged into ...
    Should be ... Windows & you have to have recently logged into ... or ... Windows & you need to have recently logged into ...
  • Line 57
    It says ... and macOS compability to automatic ...
    Should be ... and macOS compatibility to automatic ...
  • Line 61
    It says ... content is arleady scraped ...
    Should be ... content is already scraped ...

I hope you can review these parts and consider fixing them.
Congrats on your amazing project!

Login request no longer returns session

https://github.com/Avnsx/fansly/blob/57e90741b1b4a120b85791424bf0386ba5c4d14e/automatic_configurator.py#L227
is no longer capable of getting the auth token, because the login POST request no longer returns it. This is a very specific counter-patch aimed at this repository; I can safely assume they made this change just to break my scraper.

Right now my browser imitation method is still working, which covers for this issue most of the time (https://github.com/Avnsx/fansly/discussions/8#discussioncomment-2708242).

I'll fix this in a rewrite of the automatic configurator, which would have had to come in the upcoming weeks anyway.

Fansly upgrading their API from v2 to v3

As mentioned in the title, Fansly will be upgrading to v3 APIs soon. This most likely means the v2 API will be turned off in a matter of time, which will leave the scraper not working in its current form (even right now the v2 API is barely used anymore; just a couple of endpoints have not been upgraded yet). Once they're done upgrading their API to v3 entirely, I'll release a huge patch for fansly scraper that addresses multiple past issues, introduces new features like a custom download directory, and updates the scraper to use the new v3 API.

Only downloading videos

I last used the previous version of this software a few months ago, successfully, on an account. However, I have updated just now and tried it on the same account, with and without 'update recent download' enabled, and it only downloads the videos. I also started fresh by removing the old download directory to see if it would download everything, and it still only downloads the videos.
Thanks

posts being deleted

It seems like every time I scrape, when the creator deletes their posts, the files are also deleted from my folder. Why does this happen? Isn't the point of scraping to keep the uploaded files, whether they get deleted or not?

duplicates

Hi! I have a question:
how does the script detect duplicates? After downloading, it reports "48 duplicates", but when I inspect the model's page I can't find any duplicates.
What does the script mean by that?

Downloading already existing files

Every time I run the script it downloads all the media files from the creator.

The files are named for example:
Pictures/1-323_{mycreator}.png
Videos/1-707_{mycreator}.mp4

If I run the script again, the same files are downloaded but with the names:
Pictures/1-483_{mycreator}.png
Videos/1-327_{mycreator}.mp4

I'm on Ubuntu 20.04.

Scraper Skipping Downloads

Thanks for the app!

The scraper seems to skip downloading some images if they're too similar to another image in the same post. For example, the model is standing in a doorway facing left; if there's another picture with the model not moving much (facing a different direction) then it only downloads one of the pictures.

Not a big deal, I was just trying to figure out why it had put fewer photos in the "Pictures" folder than were available on Fansly.

Download more than one model at a time

Can I download all models consecutively, or list more than one manually? If I'm subscribed to 5 models and want to run one command to grab them all, is that possible?

Incomplete download

Hello,

I noticed that the videos and photos older than a specific date (July 14 2022) were not downloaded.

What could be the reason for this?

Is there a way to specify media download for a specific date range?

Thanks and congrats for this wonderful tool.

Doesn't download all posts; Subscribed content is missing

There are posts that aren't downloaded; it also says 218 duplicates declined! Those 218 might be counted as duplicates, but I'm pretty sure they aren't; those are photos from the missing posts.
Wouldn't it be good if there were a config option "Put duplicates in separate folder", so the duplicates still get downloaded?

Either way it's missing posts!

API Returned unauthorized

This started happening after updating to 0.3.4. The error message would show "Used authorization token" but it's completely different from the actual cookie. Pretty much like this issue.
#30

Bug: downloaded video is 720p while higher res is available

The Fansly scraper downloaded some videos at 720p resolution,
while 1080p and 2160p were available (and not behind a paywall).


It also does this for multiple content pieces from other content creators.

I noticed this happens with newer content, posted in the last 2 weeks.

the video downloaded is not complete

The downloader runs without any error, but the downloaded result is only around the first 30 seconds of the whole video; the rest is not downloaded.

API returned unauthorized

Hey,
firstly, thank you for this, it's really a nice tool to use!
I used it several times without issues. I didn't change anything, but now I get this message: [11]ERROR | 20:56 || API returned unauthorized. This is most likely because of a wrong authorization token, in the configuration file.

But I can't change the authorization token.

Can you tell me what to do here? Thank you very much!

Downloaded video has 2 seconds of empty space before the start of the video

This problem is kinda difficult to explain and very random, but I will do my best.

I just ran the download and most of the videos have this time before the start of the video where nothing happens.

Sometimes the video starts instantly, and sometimes it has 2 seconds of nothing at the start.
The audio will also start playing during these 2 seconds of emptiness.

It's not necessarily a massive issue as the video does play after those 2 seconds, but it's definitely not intended.

Error 4

It says my configuration file is malformed, but everything is correct.

Executable randomly closes due to being rate limited

I leave it running in the background, and after some downloads it simply closes (it's far from having downloaded everything). I'm using Windows 11 and set it to run as admin (without admin it would also close after a bit).

New feature suggestion: scrape multiple users at once

Since this scraper has an automatic hash checker, I thought about using it to automatically scrape all new content without having to actually open Fansly (so I can be extra lazy :)).

But the only problem hindering me from being the laziest human imaginable is that I need to change the username in the config.ini every time.

So my suggestion is that the username field could take an array of usernames,
which can then be looped through, with the rest of the code staying the same (see the sketch below).
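A rough sketch of how that could look with a comma-separated list in the existing config file (the 'TargetedCreator' section name follows the error message quoted in an earlier issue; the 'Username' key is an assumption):

import configparser

config = configparser.ConfigParser()
config.read('config.ini')

# Hypothetical: allow "creator1, creator2, creator3" in the existing username field.
raw = config['TargetedCreator']['Username']
usernames = [name.strip() for name in raw.split(',') if name.strip()]

for username in usernames:
    # The existing single-creator download logic would run here, unchanged.
    print(f'Scraping {username} ...')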

does not download entire media

I've tried running this several times, and it downloads maybe 1/6th of the total images and videos before closing/crashing (??). I don't have the knowledge necessary to offer anything more helpful than that to diagnose the problem, sorry.

Naming files by date posted feature?

Hello! I'm just wondering if it's possible to have an option to add the post date to the file name? With all the ones I've downloaded so far, all of the files seem to be out of order according to the creator's video/image set.
