ultimahoarder / ultimascraper

3.8K stars · 178 watchers · 609 forks · 2.14 MB

Scrape all the media from an OnlyFans account - Updated regularly

License: GNU General Public License v3.0

Languages: Python 98.14%, Dockerfile 1.86%
Topics: onlyfans, datascraping, scraper, archive

ultimascraper's People

Contributors

aboredpervert, adamvorobyov, americanseeder1865, andonandon, anotherofuser, banillasolt, casperdcl, cclauss, digitalcriminal, e0911cd45b19686, ecchiecchi0, helopy, jonathanunderwood62, kozobot, kr33g33, naxolotl, notarealemail, ood4rkl0rdoo, qtv, rakambda, reahari, resokou, rybackrulez, secretshell, sivra-d, sixinchfootlong, stranger-danger-zamu, throwaway-of, ultimahoarder, zymurnerd

ultimascraper's Issues

ModuleNotFoundError: No module named 'requests'

I'm quite the beginner with Python.

I have installed all the requirements and have verified that the requests module has been downloaded, but I'm still getting this error.

Traceback (most recent call last):
  File "C:\OnlyFans-master\OnlyFans.py", line 7, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
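This usually means pip installed the package into a different interpreter than the one running the script. A hedged sketch of commands (assuming a standard Windows Python install) that ties the install to the same interpreter:

    python -m pip install -r requirements.txt
    python OnlyFans.py

If several Python versions are installed, running pip through "python -m pip" guarantees the requests package lands in the interpreter that later executes OnlyFans.py.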

AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'

Stack trace below, Python 3.7.1 on Windows 64bit.

Scraping Images. Should take less than a minute.
Traceback (most recent call last):
  File "start.py", line 55, in <module>
    result = x.start_datascraper(session, username, app_token)
  File "D:\OFRip\modules\onlyfans.py", line 49, in start_datascraper
    response = media_scraper(session, *item[1])
  File "D:\OFRip\modules\onlyfans.py", line 168, in media_scraper
    media_set = pool.starmap(scrape_array, product(offset_array, [session]))
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 268, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "D:\OFRip\modules\onlyfans.py", line 148, in scrape_array
    dt = datetime.fromisoformat(media_api["postedAt"]).replace(tzinfo=None).strftime(
AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'
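The paths in the trace point at a Python 3.5 install (Python35-32), and datetime.fromisoformat only exists from Python 3.7 onward, so the usual fix is to actually run the script under 3.7+. If that isn't possible, a hedged fallback sketch (not the project's code) that parses the postedAt value with strptime instead:

    from datetime import datetime

    def parse_posted_at(value):
        # datetime.fromisoformat only exists on Python 3.7+; on older
        # interpreters fall back to strptime with an explicit format.
        try:
            return datetime.fromisoformat(value)
        except AttributeError:
            # Handles offsets written as "+00:00", e.g. "2019-08-08T12:54:09+00:00".
            return datetime.strptime(value.replace("+00:00", "+0000"),
                                     "%Y-%m-%dT%H:%M:%S%z")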

Script Isn't Accessing My OnlyFans Account

When loading up the script, the program refers to me by a different username than my own and I can't download anything from the accounts I'm subscribed to. I've double checked my config file and my app-token, auth_id, and auth_hash match the values shown in my browser.

[Screenshot: wrong username]

(DamnSon is not my username; it's an existing account that belongs to someone else.)

Access denied

Hi, I now have Python running, but when I choose 0 for OnlyFans it always says that access is denied. I've checked the hash and all the other parameters and they're correct. Any suggestions?
Thank you.

BUG: User Not Found

settings.json has been properly populated. After installing the requirements, running python OnlyFans.py, and then inputting a username, the return is User Not Found. I've tried running with the username, the profile link, the user ID from https://onlyfans.com/api2/v2/users/, and that link with the user ID appended. All return the same.

Settings.json filled, still not continuing

Hi,

I've filled out everything inside settings.json and when I run the script, it repeatedly asks for a username or profile link.

$ python OnlyFans.py
Input a username or profile link
https://onlyfans.com/yung_angel66
No users found
First time? Did you forget to edit your settings.json file?
Input a username or profile link
https://onlyfans.com/yung_angel66
No users found
First time? Did you forget to edit your settings.json file?
Input a username or profile link

Script is not running anymore

Hi, I'm getting this error

Traceback (most recent call last):
  File "OnlyFans.py", line 142, in <module>
    user_id = link_check(input_link)
  File "OnlyFans.py", line 38, in link_check
    temp_user_id = user_list.select('a[data-user]')
AttributeError: 'NoneType' object has no attribute 'select'

Question

I'm probably doing something wrong on my end, but I'm using Linux and this is the error I'm getting:

./StartDatascraper.py: line 2: syntax error near unexpected token `('
./StartDatascraper.py: line 2: `path = os.path.dirname(os.path.realpath(__file__))'

When I run the script it shows a cross cursor and then appears to take a screenshot, if that helps. I believe I followed the instructions correctly, but perhaps I'm just inputting something wrong somewhere.
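The shell errors come from executing the .py file directly without a shebang, so bash tries to parse the Python source itself. A hedged suggestion (assuming a standard python3 install) is to invoke the interpreter explicitly, or add a shebang as the first line of the script:

    python3 StartDatascraper.py

    # or, as the first line of StartDatascraper.py:
    #!/usr/bin/env python3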

ValueError: Invalid isoformat string

Hey @DIGITALCRIMINAL, great job with the filename wildcards! There's an issue that, on my end, happens only when downloading videos: photos download just fine, but when it gets to videos the script throws this error:

Photos Finished
Traceback (most recent call last):
  File "/OnlyFans-master/OnlyFans.py", line 172, in <module>
    scrape_choice()
  File "/OnlyFans-master/OnlyFans.py", line 83, in scrape_choice
    media_scraper(video_api, location, j_directory, only_links)
  File "/OnlyFans-master/OnlyFans.py", line 126, in media_scraper
    dt = datetime.fromisoformat(media_api["postedAt"]).replace(tzinfo=None).strftime('%d-%m-%Y')
ValueError: Invalid isoformat string: '-001-11-30T00:00:00+00:00'

I managed to make it shut up and download the precious porn by nulling dt:
dt = "prettydolphins"

but I'd love to have the real date if it's possible!
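The API appears to return a placeholder date here rather than a real one, which fromisoformat rejects. A hedged sketch (not the project's code) that keeps the real date when it parses and falls back to a readable label only when it doesn't:

    from datetime import datetime

    def safe_post_date(posted_at, fmt='%d-%m-%Y', fallback='unknown-date'):
        # Some posts come back with a placeholder like '-001-11-30T00:00:00+00:00';
        # keep the real date when it parses and a fixed label when it doesn't.
        try:
            return datetime.fromisoformat(posted_at).replace(tzinfo=None).strftime(fmt)
        except ValueError:
            return fallback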

[macOS] win32_setctime: This function is only available for the Windows platform.

Hey dude! Great work on the last commits, the scraper's performance has become amazing.
Your new dependency win32_setctime obviously works only on Windows; on macOS it halts the script.

  File "/usr/local/lib/python3.7/site-packages/win32_setctime.py", line 44, in setctime
    raise OSError("This function is only available for the Windows platform.")
OSError: This function is only available for the Windows platform.

I commented out line 206 to make it work.

        # setctime(directory, timestamp)
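Rather than commenting the call out, a hedged alternative (a sketch reusing the directory and timestamp names from the line above, not the project's actual code) is to guard it by platform, since the module imports fine on macOS but raises OSError when called:

    import platform

    # win32_setctime imports on macOS/Linux but raises OSError when called,
    # so only adjust the file creation time on Windows.
    if platform.system() == "Windows":
        from win32_setctime import setctime
        setctime(directory, timestamp)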

Saying all videos are downloaded when they're not

When I ran the script and let it finish, it took around 1 h 30 min to get ~350 videos from an OnlyFans account, then it said it had grabbed them all. There are roughly 2,600 videos on said OnlyFans account. Shall I just re-run the script with "overwrite_files" set to false and see if it grabs more videos, and keep doing that until they have all been downloaded?

Can't install requirements

This is what I get when I try to install requirements in Linux:

Requirement already satisfied: requests in /media/sdw1/holonet81/.local/lib/python2.7/site-packages (from -r requirements.txt (line 1)) (2.22.0)
Collecting beautifulsoup4 (from -r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/f9/d9/183705a87492249b212d88eef740995f55076195bcf45ed59306c146e42d/beautifulsoup4-4.8.1-py2-none-any.whl
Requirement already satisfied: urllib3 in /media/sdw1/holonet81/.local/lib/python2.7/site-packages (from -r requirements.txt (line 3)) (1.25.6)
Collecting win32-setctime (from -r requirements.txt (line 4))
ERROR: Could not find a version that satisfies the requirement win32-setctime (from -r requirements.txt (line 4)) (from versions: none)
ERROR: No matching distribution found for win32-setctime (from -r requirements.txt (line 4))

What am I doing wrong here?
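Two things stand out in the log: pip is resolving against Python 2.7 (the .local/lib/python2.7 path), and win32-setctime is a Windows-only dependency. A hedged sketch of the usual remedies: install with a Python 3 interpreter, and mark the dependency with a PEP 508 environment marker so it is skipped off Windows (the exact line in the project's requirements.txt may differ):

    # in requirements.txt
    win32-setctime; sys_platform == "win32"

    # install with Python 3
    python3 -m pip install -r requirements.txt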

Add ability to quit/stop the downloads using keyboard

Currently there is no way to stop a download short of killing the terminal the application is running in. Could you please add an option to stop the downloads and quit the application using the keyboard, e.g. "Ctrl+C"?
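Ctrl+C already raises KeyboardInterrupt in Python, so one hedged way to support this (a sketch; main() is a hypothetical entry point, not the project's actual function) is a top-level handler that turns the interrupt into a clean exit:

    import sys

    if __name__ == "__main__":
        try:
            main()  # hypothetical entry point wrapping the download loop
        except KeyboardInterrupt:
            print("\nDownloads cancelled by user.")
            sys.exit(1)

With multiprocessing pools in play, the handler would likely also need to terminate and join the pool so worker processes don't linger.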

Cannot load links.json in excel anymore

I'm getting this error in Excel when I try to convert the JSON file to a table:

Expression.Error: We cannot convert a value of type List to type Record.
Details:
Value=[List]
Type=[Type]

Auth Error

I'm not sure if I've edited the JSON file correctly, but I keep getting this error. Any help would be appreciated.

Traceback (most recent call last):
  File "onlyfans.py", line 14, in <module>
    j_directory = json_data['directory']+"/Users/"
KeyError: 'directory'

UnicodeEncodeError: 'latin-1' codec can't encode character '\u20ac'

Traceback (most recent call last):
  File "OnlyFans.py", line 231, in <module>
    user_id = link_check()
  File "OnlyFans.py", line 58, in link_check
    r = session.get(link)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\adapters.py", line 449, in send
    timeout=timeout
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\urllib3\connectionpool.py", line 603, in urlopen
    chunked=chunked)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\urllib3\connectionpool.py", line 355, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\http\client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\http\client.py", line 1285, in _send_request
    self.putheader(hdr, value)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\http\client.py", line 1217, in putheader
    values[i] = one_value.encode('latin-1')
UnicodeEncodeError: 'latin-1' codec can't encode character '\u20ac' in position 31: ordinal not in range(256)
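requests encodes HTTP header values as latin-1, and the failing header contains a '€' (\u20ac), so one of the values pasted into the settings (user agent, auth cookie, etc.) most likely picked up a stray character when it was copied from the browser. A hedged diagnostic sketch (the settings.json file name and flat key/value layout are assumptions):

    import json

    with open("settings.json", encoding="utf-8") as f:
        settings = json.load(f)

    for key, value in settings.items():
        if isinstance(value, str) and any(ord(ch) > 255 for ch in value):
            print(f"{key!r} contains characters outside latin-1: {value!r}")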

The issue I encounter when I try to run it as is

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)>

Trying to scrape images, but I'm getting the following error:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 966, in send
    self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1414, in connect
    server_hostname=server_hostname)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "onlyfans.py", line 170, in <module>
    scrape_choice()
  File "onlyfans.py", line 86, in scrape_choice
    media_scraper(image_api, location, j_directory, only_links)
  File "onlyfans.py", line 145, in media_scraper
    pool.starmap(download_media, product(media_set.items(), [directory]))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 276, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "onlyfans.py", line 154, in download_media
    urlretrieve(link, directory)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1360, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1319, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)>

Any fixes?
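This is the usual situation with the python.org installer on macOS: its Python does not use the system CA store, so until certificates are wired up, every HTTPS request fails verification. Running the "Install Certificates.command" script bundled with that Python version is the simplest fix and also covers urlretrieve. Alternatively, a hedged sketch of pointing a request at certifi's CA bundle explicitly (illustrative URL, not the project's code):

    import ssl
    import certifi
    from urllib.request import urlopen

    # Build an SSL context backed by certifi's CA bundle.
    context = ssl.create_default_context(cafile=certifi.where())
    with urlopen("https://onlyfans.com", context=context) as response:
        print(response.status)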

urllib.error.URLError: <urlopen error [WinError 10060]

I'm only having this issue when I try to scrape a profile with 1,561 photos. I'm not using a VPN or proxy. Any ideas what else I can do to fix this?

Traceback (most recent call last):
  File "C:\Python37\lib\urllib\request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "C:\Python37\lib\http\client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1026, in _send_output
    self.send(msg)
  File "C:\Python37\lib\http\client.py", line 966, in send
    self.connect()
  File "C:\Python37\lib\http\client.py", line 1406, in connect
    super().connect()
  File "C:\Python37\lib\http\client.py", line 938, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "C:\Python37\lib\socket.py", line 727, in create_connection
    raise err
  File "C:\Python37\lib\socket.py", line 716, in create_connection
    sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "OnlyFans.py", line 170, in <module>
    scrape_choice()
  File "OnlyFans.py", line 86, in scrape_choice
    media_scraper(image_api, location, j_directory, only_links)
  File "OnlyFans.py", line 145, in media_scraper
    pool.starmap(download_media, product(media_set.items(), [directory]))
  File "C:\Python37\lib\multiprocessing\pool.py", line 276, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\Python37\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
  File "C:\Python37\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Python37\lib\multiprocessing\pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "OnlyFans.py", line 154, in download_media
    urlretrieve(link, directory)
  File "C:\Python37\lib\urllib\request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "C:\Python37\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python37\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Python37\lib\urllib\request.py", line 543, in _open
    '_open', req)
  File "C:\Python37\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Python37\lib\urllib\request.py", line 1360, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "C:\Python37\lib\urllib\request.py", line 1319, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
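A profile with that many photos means a long series of sequential downloads, so occasional connection timeouts are not surprising. A hedged, generic retry wrapper around urlretrieve (a sketch, not the project's code) that waits and retries before giving up:

    import time
    from urllib.error import URLError
    from urllib.request import urlretrieve

    def download_with_retries(link, path, attempts=3, delay=5):
        # Retry transient failures such as the WinError 10060 timeout
        # a few times with a fixed delay before giving up.
        for attempt in range(1, attempts + 1):
            try:
                return urlretrieve(link, path)
            except (URLError, OSError) as error:
                if attempt == attempts:
                    raise
                print(f"Attempt {attempt} failed ({error}); retrying in {delay}s")
                time.sleep(delay)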

CSV/JSON files

Is it possible to remove any text in angle brackets, as well as Unicode text, from the "text" column when scraping to a JSON/CSV file?

Currently, I open the JSON file in Word and use the wildcard search function to delete any text starting and ending with angle brackets, replace \n with a space, etc. Then I convert it to a spreadsheet and make the relevant changes before creating a batch file to rename the files with relevant titles.

Basically I have to do a lot before I end up with something like this

"ren "5d9ec0518452aaf05bd2d.mp4" "[2019-10-11] 194 - title.mp4""

Filename suggestion

We all know what media downloaded from OnlyFans looks like, and it's not pretty… It would be great to add some templating, like in youtube-dl, to settings.json, or even hardcoded, for example:

Users/{username}/{date} - {text} - {carousel-index}.{ext}

Which will become
Users/miamalkova/2019-08-08 - Pink hair... don't care!.mp4

I see in the API requests we have those:

postedAt: "2019-08-08T12:54:09+00:00",
text: "Pink hair... don't care!"

I think having the date in the filename would work wonders for consuming the downloaded media, as it mirrors the profile order. The text could even be kept out of the filename and stored in a separate .json/.xml file for archival purposes.
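A hedged sketch of what such a template could look like, built only from the two API fields quoted above (the template keys, the sanitization rule, and the function itself are illustrative, not the project's implementation):

    from datetime import datetime

    TEMPLATE = "Users/{username}/{date} - {text}.{ext}"

    def build_filename(username, posted_at, text, ext):
        date = datetime.fromisoformat(posted_at).strftime("%Y-%m-%d")
        # Drop characters most filesystems reject in names.
        safe_text = "".join(c for c in text if c not in '\\/:*?"<>|').strip()
        return TEMPLATE.format(username=username, date=date, text=safe_text, ext=ext)

    # build_filename("miamalkova", "2019-08-08T12:54:09+00:00",
    #                "Pink hair... don't care!", "mp4")
    # -> "Users/miamalkova/2019-08-08 - Pink hair... don't care!.mp4"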

AttributeError: 'NoneType' object has no attribute 'find'

When searching for someone, it will fail and output the following:

Traceback (most recent call last):
  File "OnlyFans.py", line 109, in <module>
    user_id = link_check(input_link)
  File "OnlyFans.py", line 38, in link_check
    temp_user_id = html.find("div", {"class": "b-users"}).find("a", attrs={"data-user", True})
AttributeError: 'NoneType' object has no attribute 'find'

OSError with {text} file_name_format

I'm running into the following error with a certain file when I use the {text} option in file_name_format:

OSError: [Errno 22] Invalid argument: 'X:\OnlyFans/Users/boobzillaxxx/Images/2018-02-23-Waistraining. HIIT cardio Get access to my unseen and exclusive content at onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com-33881666--upload-467305-1519404725819-JPEG-20180222-092806-349656419.jpg'
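The failing path is both very long and built straight from the post text, so the {text} component needs sanitizing and truncating before it goes into a filename. A hedged sketch (the 150-character limit is an arbitrary assumption; Windows also rejects characters such as :*?"<>|):

    import re

    def sanitize_text_component(text, max_len=150):
        # Drop characters Windows rejects in filenames, collapse whitespace,
        # then truncate so the full path stays within OS limits.
        text = re.sub(r'[\\/:*?"<>|]', "", text)
        text = re.sub(r"\s+", " ", text).strip()
        return text[:max_len]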

Not an issue

Do you think you will write a script to pull media from justfor.fans accounts?

Dates don't work on macOS

win32-setctime doesn't run on macOS, so you have to remove it from the script, and you just end up with randomly named videos.

TypeError: argument of type 'NoneType' is not iterable

I'm encountering the following with iamsweette:

Traceback (most recent call last):
  File "C:\Users\USER\Desktop\OnlyFans-master\Start Datascraper.py", line 20, in <module>
    result = start_datascraper(session, app_token, username)
  File "C:\Users\USER\Desktop\OnlyFans-master\modules\onlyfans.py", line 47, in start_datascraper
    response = media_scraper(session, *item[1])
  File "C:\Users\USER\Desktop\OnlyFans-master\modules\onlyfans.py", line 163, in media_scraper
    media_set = pool.starmap(scrape_array, product(offset_array, [session]))
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 276, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "C:\Users\USER\Desktop\OnlyFans-master\modules\onlyfans.py", line 135, in scrape_array
    if "ca2.convert" in file:
TypeError: argument of type 'NoneType' is not iterable

Error Scraping

I am getting the error below when scraping. It seems like it authenticated, but it can't get the details of the media? I get the same error whichever option is chosen: everything, images, videos, and even with -l.

Scrape: a = Everything | b = Images | c = Videos
Optional Arguments: -l = Only scrape links -()- Example: "a -l"
c
Traceback (most recent call last):
  File "OnlyFans.py", line 170, in <module>
    scrape_choice()
  File "OnlyFans.py", line 91, in scrape_choice
    media_scraper(video_api, location, j_directory, only_links)
  File "OnlyFans.py", line 124, in media_scraper
    dt = datetime.fromisoformat(media_api["postedAt"]).replace(tzinfo=None).strftime('%d-%m-%Y')
AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'

Error in scraping videos

Scrape: a = Everything | b = Images | c = Videos
Optional Arguments: -l = Only scrape links -()- Example: "a -l"
c -l
Traceback (most recent call last):
  File "onlyfans.py", line 170, in <module>
    scrape_choice()
  File "onlyfans.py", line 91, in scrape_choice
    media_scraper(video_api, location, j_directory, only_links)
  File "onlyfans.py", line 117, in media_scraper
    for media in media_api["media"]:
TypeError: string indices must be integers

Getting Access denied

I have filled out the requirements inside settings.json, and I am still currently subscribed to the user in question. I'm also using the latest commit fa94513.

$ python OnlyFans.py
Input a username or profile link
https://onlyfans.com/yung_angel66
Access denied.
First time? Did you forget to edit your settings.json file?
Input a username or profile link

let me know how I can help you troubleshoot!

Can't get tokens

Hello, I tried following your method but I can't find anything.
What network debugger do you use? (I tried Firefox and Chrome.)
Is there anything special to know?
Thank you,

New header image for README.MD

I made this header image for the repo, stealing designs from here and there. If you like it, you could put it at the top of README.MD.

[Header image attached]

Date format

Is there any way to change the date format to Y/M/D?
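The date string comes from strftime, so changing the order is just a different format string (a sketch; where that string lives differs between versions of the script):

    from datetime import datetime

    dt = datetime.fromisoformat("2019-08-08T12:54:09+00:00")
    print(dt.strftime('%d-%m-%Y'))  # current style: 08-08-2019
    print(dt.strftime('%Y/%m/%d'))  # Y/M/D style:  2019/08/08

Note that a literal "/" inside a filename creates subdirectories, so "%Y-%m-%d" may be the safer on-disk variant of Y/M/D ordering.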

TimeoutError: [WinError 10060]

I'm still getting timeouts. It did output to the error log. Do you need info from the error log?

Traceback (most recent call last):

  File "C:\Python37\lib\urllib\request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "C:\Python37\lib\http\client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1026, in _send_output
    self.send(msg)
  File "C:\Python37\lib\http\client.py", line 966, in send
    self.connect()
  File "C:\Python37\lib\http\client.py", line 1406, in connect
    super().connect()
  File "C:\Python37\lib\http\client.py", line 938, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "C:\Python37\lib\socket.py", line 727, in create_connection
    raise err
  File "C:\Python37\lib\socket.py", line 716, in create_connection
    sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "OnlyFans.py", line 221, in <module>
    scrape_choice()
  File "OnlyFans.py", line 87, in scrape_choice
    media_scraper(image_api, location, j_directory, only_links)
  File "OnlyFans.py", line 170, in media_scraper
    pool.starmap(download_media, product(media_set.items(), [directory]))
  File "C:\Python37\lib\multiprocessing\pool.py", line 276, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\Python37\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
  File "C:\Python37\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Python37\lib\multiprocessing\pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "OnlyFans.py", line 192, in download_media
    urlretrieve(link, directory)
  File "C:\Python37\lib\urllib\request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "C:\Python37\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python37\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Python37\lib\urllib\request.py", line 543, in _open
    '_open', req)
  File "C:\Python37\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Python37\lib\urllib\request.py", line 1360, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "C:\Python37\lib\urllib\request.py", line 1319, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>

Ignore Existing Files

It looks like pre-existing media is overwritten instead of ignored. Is it possible to add that as an option when running the script?
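A hedged sketch of the check a download step could make before fetching (illustrative, not the project's code; an "overwrite_files" setting is mentioned elsewhere in these issues for exactly this purpose):

    import os
    from urllib.request import urlretrieve

    def download_if_missing(link, path, overwrite=False):
        # Skip files that already exist unless overwriting was requested.
        if not overwrite and os.path.isfile(path):
            print(f"Skipping existing file: {path}")
            return path
        urlretrieve(link, path)
        return path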

Getting "Access Denied" errors

I've tried multiple accounts and got the auth creds, but I'm still getting Access Denied. I wonder if something changed in the login process.

AttributeError: 'NoneType' object has no attribute 'select'

Traceback (most recent call last):
  File "OnlyFans.py", line 142, in <module>
    user_id = link_check(input_link)
  File "OnlyFans.py", line 38, in link_check
    temp_user_id = user_list.select('a[data-user]')
AttributeError: 'NoneType' object has no attribute 'select'

Not grabbing all OF posts

Hi, thanks for this script! I ran it earlier but it looks like it isn't scraping all the content for a given user. The profile shows 585 photos and 157 videos, but the script only downloaded 133 photos and 100 videos. Any idea what the issue could be?

object is not iterable

root@ip:~/OnlyFans# python3 StartDatascraper.py
Auth (V1) Attempt 1/10
Access denied.
Auth (V1) Attempt 2/10
Access denied.
Auth (V1) Attempt 3/10
Welcome u****
Some OnlyFans' video links have SLOW download (Blame OF). I suggest importing the metadata json content to a Download Manager like IDM or JDownloader, or you could be waiting for 1HR+ for 300 videos to be finished.
Names: 0 = All | 1 = someonesusername
1
Invalid Choice
Traceback (most recent call last):
  File "StartDatascraper.py", line 92, in <module>
    session[0], username, site_name, app_token)
  File "/root/OnlyFans/modules/onlyfans.py", line 51, in start_datascraper
    for item in array:
TypeError: 'bool' object is not iterable
root@ip:~/OnlyFans# python3 --version
Python 3.6.8
root@ip:~/OnlyFans# pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
root@ip:~/OnlyFans#

Access Denied

Just updated to the latest version and updated config.json. I'm getting this:

Site: 0 = onlyfans | 1 = justforfans
0
Access denied.
