isarabjitdhiman / tweeterpy

TweeterPy is a python library to extract data from Twitter. TweeterPy API lets you scrape data from a user's profile like username, userid, bio, followers/followings list, profile media, tweets, etc.

License: MIT License

Python 100.00%
tweeter tweeter-api tweets-extraction twitter-api twitter-bot twitter-python twitter-scraper twitter-scraping twitter-client twitter-automation

tweeterpy's Introduction

TweeterPy

Discord

Overview

TweeterPy is a python library to extract data from Twitter. TweeterPy API lets you scrape data from a user's profile like username, userid, bio, followers/followings list, profile media, tweets, etc.

Note: Use at your own risk. Scraping with residential proxies is advisable when extracting data at scale or in bulk. If possible, use multiple accounts to fetch data from Twitter. DON'T USE YOUR PERSONAL ACCOUNT FOR SCRAPING PURPOSES.

Installation

Install TweeterPy with pip

  pip install tweeterpy

Usage/Examples

python quickstart.py

OR

from tweeterpy import TweeterPy

TweeterPy()

Example - Get User ID of a User.

from tweeterpy import TweeterPy

twitter = TweeterPy()

print(twitter.get_user_id('elonmusk'))

Documentation

Check out the step-by-step guide.

Documentation

Configuration

Example - Config Usage

from tweeterpy import config

config.PROXY = {"http":"127.0.0.1","https":"127.0.0.1"}
config.TIMEOUT = 10
config.UPDATE_API = False

Check out configuration docs for the available settings.

Configurations

Features

  • Extracts Tweets
  • Extracts User's Followers
  • Extracts User's Followings
  • Extracts User's Profile Details
  • Extracts Twitter Profile Media and much more.

Authors

Feedback

If you have any feedback, please reach out at [email protected] or contact me on social media: @iSarabjitDhiman

Support

For support, email [email protected]

tweeterpy's People

Contributors

dependabot[bot] · isarabjitdhiman


tweeterpy's Issues

Pagination issue when a total number of results is specified (handling pagination using end cursor)

Unable to continue fetching friends/followers using get_friends; the API call returns empty data after a few calls.
Example (this user has 137 followers), using it with the total=10 parameter:
Fetching followers
Total users fetched : 10 : End cursor value : 1761772550753500306|1718657196322979790
Total users fetched : 20 : End cursor value : 1753300493287752709|1718657196322979738
Total users fetched : 30 : End cursor value : 0|1718657196322979700

Warning: No data returned from API call
Total users fetched : 30 End cursor value : 0|1718657196322979698
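Until this is fixed in the library, a defensive pagination loop can stop cleanly when the API starts returning empty pages instead of looping on a dead cursor. A minimal sketch, assuming only the response keys shown in the logs above ('data', 'cursor_endpoint', 'has_next_page'); `fetch_page` is a stand-in for the real `twitter.get_friends(...)` call:

```python
def collect_followers(fetch_page, total):
    """Paginate with end_cursor, stopping early on empty pages.

    fetch_page(end_cursor) -> dict with 'data', 'cursor_endpoint' and
    'has_next_page' keys (the response shape reported in this issue).
    """
    users, cursor = [], None
    while len(users) < total:
        response = fetch_page(cursor)
        page = response.get("data") or []
        if not page:  # API returned nothing: stop instead of spinning forever
            break
        users.extend(page)
        if not response.get("has_next_page"):
            break
        cursor = response.get("cursor_endpoint")
    return users[:total]
```

Feeding it a stub fetcher whose third page is empty shows the loop halting at 30 users, mirroring the behaviour reported above without hammering the API further.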

get_user_tweets count has no effect

Hello, I found two problems:

  1. The data obtained by get_user_tweets is incomplete.
  2. The get_user_tweets count does not take effect.

Here is my code; take a look. Is it a problem with the package or with my code?

tweets = await user.get_tweets('Tweets', 10)
for tweet in tweets:
    print(tweet.created_at)
    print(tweet.text)

Thank you!

get_tweet function: with_tweet_replies cannot return all the replies?

Hi, thanks for this amazing package 👍

Just one little issue here: I'm looking for a reliable approach to check all the replies to one tweet ID.
The reply count on the Twitter website is around 3,000, but I only get 225 results returned.

I've tried pagination=True / False, along with applying the end_cursor from the last result returned, but still no luck. I also checked the rate limit, which was not exhausted.

Is this a limit of the Twitter GraphQL API, or something else?

end_cursor value

Hi,

In the get_user_tweets function, I see I can pass the end_cursor value to have it start from a certain tweet.

I've tried to use the cursor_endpoint value from the function's returned data, but it doesn't work.

It looks something like this:
'cursor_endpoint': 'DAABCgABF1OKqJc__-gKAAIXRIgW-doQAAgAAwAAAAIAAA'

or I tried
'entryId': 'tweet-1680771569491271681' or 1680771569491271681

None of these seems to work.

Can you please advise which value I should use?

Thanks,

RuntimeError('asyncio.run() cannot be called from a running event loop')

I want to use this for a Discord bot which is already async, but this error occurs. I was able to avoid this by using the current main branch, but I read you were planning on merging the async branch into main. It would be great if you could create a solution that doesn't utilize asyncio.run in request_util.py, or perhaps allows the user to specify if an event loop already exists, however that would be done.
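Until the library is loop-aware, one common caller-side workaround is to push the blocking TweeterPy call onto a worker thread with asyncio.to_thread, so that the library's internal asyncio.run executes in a thread that has no running event loop. A sketch under that assumption, with `fetch_user_id` standing in for any blocking TweeterPy call:

```python
import asyncio

def fetch_user_id(username):
    # Placeholder for a blocking TweeterPy call,
    # e.g. TweeterPy().get_user_id(username).
    return f"id-for-{username}"

async def handle_command(username):
    # Run the blocking call in a worker thread; the bot's event loop stays
    # free, and any asyncio.run inside the library sees no running loop.
    return await asyncio.to_thread(fetch_user_id, username)

async def main():
    return await handle_command("elonmusk")

result = asyncio.run(main())
```

In a real Discord bot, `handle_command` would be the body of a command handler; the key point is that the library call never runs directly on the bot's loop.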

[Enhance] Data class for each type

It'd be a good idea to have a library-native data class, for example User or Tweet.

That would help avoid being buried by the heavy payloads the GraphQL endpoints return.

For example I've been working on these two:

from dataclasses import dataclass


@dataclass
class TwitterTweet:
    rest_id: str
    user_id: str
    full_text: str
    created_at: str
    retweet_count: int
    favorite_count: int
    reply_count: int
    lang: str
    in_reply_to_status_id_str: str
    hashtags: list[str]
    user_mentions: list[str]
    urls: list[str]
    sentiment: str = ''

    @classmethod
    def from_payload(cls, payload):
        legacy_data = payload.get('legacy', {})
        entities_data = legacy_data.get('entities', {})
        return cls(rest_id=payload.get('rest_id', ''),
                   user_id=legacy_data.get('user_id_str', ''),
                   full_text=legacy_data.get('full_text', ''),
                   created_at=legacy_data.get('created_at', ''),
                   retweet_count=legacy_data.get('retweet_count', 0),
                   favorite_count=legacy_data.get('favorite_count', 0),
                   reply_count=legacy_data.get('reply_count', 0),
                   lang=legacy_data.get('lang', ''),
                   in_reply_to_status_id_str=legacy_data.get(
                       'in_reply_to_status_id_str', ''),
                   hashtags=[
                       tag['text']
                       for tag in entities_data.get('hashtags', [])
                   ],
                   user_mentions=[
                       mention['id_str']
                       for mention in entities_data.get('user_mentions', [])
                   ],
                   urls=[
                       url['expanded_url']
                       for url in entities_data.get('urls', [])
                   ])

and

from dataclasses import dataclass


@dataclass
class TwitterUser:
    id: str
    name: str
    screen_name: str
    statuses_count: int
    followers_count: int
    friends_count: int
    favourites_count: int
    listed_count: int
    default_profile: bool
    default_profile_image: bool
    location: str
    description: str
    description_has_url: bool
    description_url: str
    followers_to_following_ratio: float
    verified_type: str
    verified: bool
    is_blue_verified: bool
    has_graduated_access: bool
    can_dm: bool
    media_count: int
    has_custom_timelines: bool
    has_verification_info: bool
    possibly_sensitive: bool

    @classmethod
    def from_payload(cls, payload):
        legacy_data = payload.get('legacy', {})
        urls = [
            url['expanded_url'] for url in legacy_data.get('entities', {}).get(
                'description', {}).get('urls', [])
        ]
        followers_count = legacy_data.get('followers_count', 0)
        friends_count = legacy_data.get('friends_count', 0)

        return cls(
            id=payload.get('rest_id', ''),
            name=legacy_data.get('name', ''),
            screen_name=legacy_data.get('screen_name', ''),
            statuses_count=legacy_data.get('statuses_count', 0),
            followers_count=followers_count,
            friends_count=friends_count,
            favourites_count=legacy_data.get('favourites_count', 0),
            listed_count=legacy_data.get('listed_count', 0),
            default_profile=legacy_data.get('default_profile', False),
            default_profile_image=legacy_data.get('default_profile_image',
                                                  False),
            location=legacy_data.get('location', ''),
            description=legacy_data.get('description', ''),
            description_has_url=bool(urls),
            description_url=','.join(urls) if urls else '',
            followers_to_following_ratio=followers_count /
            friends_count if friends_count != 0 else 0,
            verified_type=legacy_data.get('verified_type', ''),
            verified=legacy_data.get('verified', False),
            is_blue_verified=payload.get('is_blue_verified', False),
            has_graduated_access=payload.get('has_graduated_access', False),
            can_dm=legacy_data.get('can_dm', False),
            media_count=legacy_data.get('media_count', 0),
            has_custom_timelines=legacy_data.get('has_custom_timelines',
                                                 False),
            has_verification_info=bool(payload.get('verification_info')),  # coerce to bool to match the annotated type
            possibly_sensitive=legacy_data.get('possibly_sensitive', False),
        )

Tweet URL

Hey,
Can you help me figure out how to get a tweet URL using TweeterPy?
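TweeterPy doesn't appear to expose a URL helper directly, but a tweet's canonical URL can be assembled from two fields present in the payloads: the author's screen_name and the tweet's rest_id. A minimal sketch (the helper name is mine, not part of the library):

```python
def tweet_url(screen_name, tweet_id):
    """Build the canonical tweet URL from the author's screen name and tweet ID."""
    return f"https://twitter.com/{screen_name}/status/{tweet_id}"
```

For example, `tweet_url("elonmusk", "1680771569491271681")` uses a tweet ID taken from another issue on this page.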

After 50k followers, the script throws irrelevant exceptions.

I am trying with 1k tokens: https://www.mediafire.com/file/194pr1tulmh11mi/auth_tokens.txt/file
After collecting 50k followers, the script throws this error:

'result'
2024-04-10 22:16:09,401 [INFO] :: User is authenticated.
2024-04-10 22:16:09,402 [INFO] :: User is authenticated.
2024-04-10 22:16:09,755 [ERROR] :: Could not authenticate you

Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\site-packages\tweeterpy-1.0.17-py3.12.egg\tweeterpy\request_util.py", line 32, in make_request
    return util.check_for_errors(response.json())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\site-packages\tweeterpy-1.0.17-py3.12.egg\tweeterpy\util.py", line 102, in check_for_errors
    raise Exception(error_message)
Exception: Could not authenticate you
Could not authenticate you
2024-04-10 22:16:11,907 [INFO] :: User is authenticated.
2024-04-10 22:16:11,907 [INFO] :: User is authenticated.
'result'
2024-04-10 22:16:14,830 [INFO] :: User is authenticated.
2024-04-10 22:16:14,830 [INFO] :: User is authenticated.
'result'
[... the same "'result'" / "User is authenticated." pattern repeats every ~3 seconds through 22:16:54 ...]


This is my full code:

from tweeterpy import TweeterPy
from tweeterpy.util import RateLimitError
import itertools
import urllib3

urllib3.disable_warnings()

list_tokens = open('twitter_tokens.txt', 'r', encoding='utf-8').read().splitlines()
accounts_pool = itertools.cycle(list_tokens)

def get_account():
    return next(accounts_pool).strip()

class Profile():

    def __init__(self, profile):
        self.profile = profile.strip()

    def get_followers(self, cursor=None):
        has_more = True
        twitter = TweeterPy()
        twitter.generate_session(auth_token=get_account())
        while has_more:
            try:
                response = twitter.get_friends(self.profile, follower=True, end_cursor=cursor, pagination=False)
                with open(self.profile + '.txt', 'a', encoding='utf-8') as save_followers:
                    for follower in response['data']:
                        screen_name = follower['content']['itemContent']['user_results']['result']['legacy']['screen_name']
                        save_followers.write(screen_name + '\n')
                api_rate_limits = response.get('api_rate_limit') or {}
                if api_rate_limits.get('rate_limit_exhausted'):
                    twitter.generate_session(auth_token=get_account())
                has_more = response.get('has_next_page')
                if has_more:
                    cursor = response.get('cursor_endpoint')
            except Exception as e:
                print(e)
                twitter.generate_session(auth_token=get_account())
            ## YOUR CUSTOM CODE HERE (DATA HANDLING, REQUEST DELAYS, SESSION SHUFFLING ETC.)
            ## time.sleep(random.uniform(7, 10))

def create_and_launch_threads(profile):
    profile_client = Profile(profile)
    profile_client.get_followers()

create_and_launch_threads('elonmusk')

Master Branch responsive_web_media_download_video_enabled

It was working fine last week and now it returns

twitter.get_user_tweets('elonmusk', total=10)
The following features cannot be null: responsive_web_media_download_video_enabled
{'data': [], 'cursor_endpoint': None, 'has_next_page': True}

Do you happen to know what is missing here?
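Since the error complains about a missing GraphQL feature flag, one unverified workaround is to let TweeterPy refresh its API data on startup via the config module (the same setting shown in the Configuration section above), so the client picks up newly added flags like responsive_web_media_download_video_enabled:

```python
from tweeterpy import config

# Ask TweeterPy to re-fetch Twitter's current API/feature-flag data
# before the client is constructed (unverified workaround).
config.UPDATE_API = True
```

If the pinned flag list in the installed version is simply missing the new key, only a library update will fully resolve it.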

Login does not work any more

Hi,

Looks like the login gateway has changed or is being blocked somehow.

twitter = TweeterPy()
twitter.login(account,password)

API Updated Successfully.
'content-type'

File /opt/conda/lib/python3.9/site-packages/tweeterpy/login_util.py:33, in TaskHandler._get_flow_token(self)
     22 params = {'flow_name': 'login'}
     23 payload = {'input_flow_data': {
     24     'flow_context': {'debug_overrides': {}, 'start_location': {'location': 'manual_link'}, }, },
     25     'subtask_versions': {'action_list': 2, 'alert_dialog': 1, 'app_download_cta': 1, 'check_logged_in_account': 1,
   (...)
     31                          'settings_list': 7, 'show_code': 1, 'sign_up': 2, 'sign_up_review': 4, 'tweet_selection_urt': 1, 'update_users': 1,
     32                          'upload_media': 1, 'user_recommendations_list': 4, 'user_recommendations_urt': 1, 'wait_spinner': 3, 'web_modal': 1}}
---> 33 return make_request(Path.TASK_URL, method="POST", params=params, json=payload)

File /opt/conda/lib/python3.9/site-packages/tweeterpy/request_util.py:38, in make_request(url, session, method, max_retries, timeout, **kwargs)
     35 if api_limit_stats.get('rate_limit_exhausted'):
     36     print(
     37         f"\033[91m Rate Limit Exceeded:\033[0m {api_limit_stats}")
---> 38 raise error

File /opt/conda/lib/python3.9/site-packages/tweeterpy/request_util.py:22, in make_request(url, session, method, max_retries, timeout, **kwargs)
     20 api_limit_stats = util.check_api_rate_limits(response)
     21 soup = bs4.BeautifulSoup(response.content, "lxml")
---> 22 if "json" in response.headers["Content-Type"]:
     23     return util.check_for_errors(response.json())
     24 response_text = "\n".join(
     25     [line.strip() for line in soup.text.split("\n") if line.strip()])

File /opt/conda/lib/python3.9/site-packages/requests/structures.py:52, in CaseInsensitiveDict.__getitem__(self, key)
     51 def __getitem__(self, key):
---> 52     return self._store[key.lower()][1]

KeyError: 'content-type'

Thanks,
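The KeyError above shows the response arrived without a Content-Type header, and request_util.py indexes the header mapping directly. A defensive lookup with .get avoids the crash; this is a sketch of the pattern, not the library's actual code:

```python
def is_json_response(headers):
    """Decide whether a response is JSON, tolerating a missing Content-Type.

    requests exposes response.headers as a case-insensitive mapping, so a
    plain .get with a default covers the header being absent entirely.
    """
    return "json" in headers.get("Content-Type", "")
```

With this check, a header-less response falls through to the non-JSON branch instead of raising KeyError: 'content-type'.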

API Issues when attempting to use it?

Python Code:

from tweeterpy import TweeterPy
from tweeterpy import config
from tweeterpy.util import find_nested_key

config.UPDATE_API=True

def main():
    config.TIMEOUT = 5
    twitter = TweeterPy()


if __name__ == "__main__":
    main()
2023-11-05 18:17:33,072 [ERROR] :: Couldn't get the API Url.
'NoneType' object has no attribute 'group'
Traceback (most recent call last):
  File "C:\Users\___\AppData\Roaming\Python\Python311\site-packages\tweeterpy\api_util.py", line 59, in _get_api_file_url
    api_file_name = re.search(api_file_regex, page_source).group(1)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'group'
2023-11-05 18:17:33,073 [WARNING] :: 'NoneType' object has no attribute 'group' Couldn't get the latest API data.
2023-11-05 18:17:33,074 [WARNING] :: Couldn't find the API backup file.
2023-11-05 18:17:33,075 [WARNING] :: Couldn't restore API data from the backup file.
'NoneType' object has no attribute 'group'
2023-11-05 18:17:33,075 [ERROR] :: API Couldn't be Updated.
'NoneType' object has no attribute 'group'
Traceback (most recent call last):
  File "C:\Users\____\AppData\Roaming\Python\Python311\site-packages\tweeterpy\api_util.py", line 41, in __init__
    feature_switches, api_endpoints_data = self._load_api_data()
                                           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\___\AppData\Roaming\Python\Python311\site-packages\tweeterpy\api_util.py", line 34, in __init__
    api_file_url = self._get_api_file_url(page_source)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\____\AppData\Roaming\Python\Python311\site-packages\tweeterpy\api_util.py", line 59, in _get_api_file_url
    api_file_name = re.search(api_file_regex, page_source).group(1)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'group'
Traceback (most recent call last):
  File "c:\Users\____\Desktop\VS-Code\Otis\test.py", line 13, in <module>
    main()
  File "c:\Users\____\Desktop\VS-Code\Otis\test.py", line 9, in main
    twitter = TweeterPy()
              ^^^^^^^^^^^
  File "C:\Users\____\AppData\Roaming\Python\Python311\site-packages\tweeterpy\tweeterpy.py", line 31, in __init__
    ApiUpdater(update_api=config.UPDATE_API)
  File "C:\Users\___\AppData\Roaming\Python\Python311\site-packages\tweeterpy\api_util.py", line 41, in __init__
    feature_switches, api_endpoints_data = self._load_api_data()
                                           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\____\AppData\Roaming\Python\Python311\site-packages\tweeterpy\api_util.py", line 34, in __init__
    api_file_url = self._get_api_file_url(page_source)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\_____\AppData\Roaming\Python\Python311\site-packages\tweeterpy\api_util.py", line 59, in _get_api_file_url
    api_file_name = re.search(api_file_regex, page_source).group(1)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'group'

I tried using the API for the first time and it just crashed?

Too Many Requests

requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url
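A standard client-side mitigation for 429 responses is retrying with exponential backoff. A generic sketch (not part of TweeterPy's API); `call` stands in for any request function you want to wrap:

```python
import time

def with_backoff(call, retries=4, base_delay=1.0):
    """Retry call() on failure, doubling the delay after each attempt."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))
```

Usage would look like `with_backoff(lambda: twitter.get_user_id('elonmusk'))`; pairing this with longer base delays (and rotating sessions) is the usual way to stay under the rate limit.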

Unexpected Keyword Argument when calling twitter.get_user_tweets()

Get the unexpected keyword argument error when I use the following code.

from tweeterpy import TweeterPy
twitter = TweeterPy()
twitter.get_user_tweets(user_id, with_replies=True, end_cursor=None, total=2, from_date='19-07-2023', to_date='20-07-2023')

Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
[<ipython-input-33-3ea1522e5413>](https://localhost:8080/#) in <cell line: 3>()
      1 # from_date = '19 July 2023'
      2 # to_date = '20 July 2023'
----> 3 tweets_by_user_id = twitter.get_user_tweets(user_id, with_replies=True, end_cursor=None, total=2, from_date='19-07-2023', to_date='20-07-2023')

[/usr/local/lib/python3.10/dist-packages/tweeterpy/tweeterpy.py](https://localhost:8080/#) in wrapper(self, *args, **kwargs)
    109             if not self.logged_in():
    110                 self.login()
--> 111             return original_function(self, *args, **kwargs)
    112         return wrapper
    113 

TypeError: TweeterPy.get_user_tweets() got an unexpected keyword argument 'from_date'
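The installed version evidently predates from_date/to_date support. One way to guard a script against this kind of version drift is to drop keyword arguments the installed function does not accept, using inspect.signature. A generic sketch; `filter_kwargs` is my helper, not part of TweeterPy, and the local `get_user_tweets` below is a stand-in for the installed method:

```python
import inspect

def filter_kwargs(func, **kwargs):
    """Keep only the keyword arguments that func actually accepts."""
    accepted = inspect.signature(func).parameters
    return {name: value for name, value in kwargs.items() if name in accepted}

def get_user_tweets(user_id, with_replies=False, total=None):
    # Stand-in for the installed TweeterPy method, which lacks from_date/to_date.
    return (user_id, with_replies, total)

# from_date is silently dropped because this version doesn't accept it.
safe = filter_kwargs(get_user_tweets, with_replies=True, total=2, from_date='19-07-2023')
result = get_user_tweets("44196397", **safe)
```

The cleaner fix, of course, is upgrading to a tweeterpy release whose get_user_tweets actually accepts the date filters.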

login error - unable to handle suspicious login

File "/usr/local/anaconda3/envs/eth_fi/lib/python3.10/site-packages/tweeterpy/tweeterpy.py", line 207, in login
TaskHandler().login(username, password)
File "/usr/local/anaconda3/envs/eth_fi/lib/python3.10/site-packages/tweeterpy/logging_util.py", line 45, in wrapper
raise error
File "/usr/local/anaconda3/envs/eth_fi/lib/python3.10/site-packages/tweeterpy/logging_util.py", line 43, in wrapper
returned_output = original_function(*args, **kwargs)
File "/usr/local/anaconda3/envs/eth_fi/lib/python3.10/site-packages/tweeterpy/login_util.py", line 126, in login
raise error
File "/usr/local/anaconda3/envs/eth_fi/lib/python3.10/site-packages/tweeterpy/login_util.py", line 103, in login
task_id = [task_id for task_id in task_flow_mapper.keys() if task_id in tasks][0]
IndexError: list index out of range

The password is right. Thanks for your reply!

Expecting value: line 1 column 2 (char 1)

Hi,
Giving this a spin, I'm not getting any data from get_user_tweets, get_list_tweets, search, get_liked_tweets, or get_user_timeline.
I like the pickled session resuming; good idea.
get_user_id returns correctly.
I traced it back to make_request inside request_util, which raises the exception in the title after the retries.
The response dictionary is as follows:
{'data': [], 'cursor_endpoint': None, 'has_next_page': True}

For get_user_timeline, these are the URLs that get hit:
https://twitter.com/i/api/graphql/Uv42IObcWFhsgKHYEh5Brw/HomeTimeline
https://twitter.com/i/api/graphql/Uv42IObcWFhsgKHYEh5Brw/HomeTimeline
https://twitter.com/i/api/graphql/Uv42IObcWFhsgKHYEh5Brw/HomeTimeline
Expecting value: line 1 column 2 (char 1)

EDIT:
Just saw the closed issue.
I can confirm I'm not using proxies and I can use other Twitter libraries. Python 3.10.11.

Error in demjson setup command: use_2to3 is invalid

Hi @iSarabjitDhiman

I got this error while installing TweeterPy.

Collecting demjson
  Downloading demjson-2.2.4.tar.gz (131 kB)
     ---------------------------------------- 131.5/131.5 kB 2.6 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [1 lines of output]
      error in demjson setup command: use_2to3 is invalid.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.

historical data

I can see that the following code returns the latest 100 tweets; is it possible to get more history?

from tweeterpy import TweeterPy

twitter = TweeterPy()

user = twitter.get_user_id("elonmusk")

tweets = twitter.get_user_tweets(user)

print(tweets)

unable to configure handler 'file'

Hey @iSarabjitDhiman, since you move so swiftly, here's a new one for you.
This might not affect everybody and might even be related to my system; I can't be sure.
You changed your logger system a bit in the latest versions, as this wasn't an issue before; at least in 0.0.10 the implementation was a lot simpler.

Here's part of the trace:

File ".../python3.10/site-packages/tweeterpy/util.py", line 10, in <module>
  logging.config.dictConfig(config.LOGGING_CONFIG)
File "/usr/lib/python3.10/logging/config.py", line 811, in dictConfig
  dictConfigClass(config).configure()
File "/usr/lib/python3.10/logging/config.py", line 572, in configure
  raise ValueError('Unable to configure handler '
ValueError: Unable to configure handler 'file'

I tried disabling the logging but it changes nothing; I also tried specifying the file as None.
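"ValueError: Unable to configure handler 'file'" from dictConfig usually means the handler's target (typically the log file path) cannot be created where the process is running. A minimal reproduction of the mechanism with a plain dictConfig, independent of TweeterPy; the same handler succeeds once it points at a writable path:

```python
import logging.config
import os
import tempfile

# A writable path makes the 'file' handler configure cleanly; pointing
# `filename` at an unwritable directory reproduces the ValueError above.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")

config = {
    "version": 1,
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": log_path,
        },
    },
    "root": {"handlers": ["file"], "level": "INFO"},
}

logging.config.dictConfig(config)
logging.getLogger().info("hello")
```

So checking where tweeterpy's LOGGING_CONFIG tries to create its log file, and whether the current working directory is writable, is a reasonable first step.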

generate_session now fails

This is not a guest_session

Traceback (most recent call last):
  File ".../python3.10/site-packages/tweeterpy/request_util.py", line 22, in make_request
    if "json" in response.headers["Content-Type"]:
  File ".../python3.10/site-packages/requests/structures.py", line 52, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'content-type'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../test_twitterPy.py", line 20, in <module>
    twitter_api = TweeterPy()
  File ".../python3.10/site-packages/tweeterpy/tweeterpy.py", line 19, in __init__
    self.generate_session()
  File ".../python3.10/site-packages/tweeterpy/tweeterpy.py", line 128, in generate_session
    make_request(Path.BASE_URL, session=self.session)
  File ".../python3.10/site-packages/tweeterpy/request_util.py", line 35, in make_request
    if api_limit_stats.get('rate_limit_exhausted'):
AttributeError: 'NoneType' object has no attribute 'get'

UPDATE:
It has recovered, but it's a bad sign that something is about to change.

Exception ignored. RuntimeError: Event loop is closed - problem with the async await branch

While trying to scrape using the async_await branch, I got the result that I wanted, but I also got this error:

API Updated Successfully.
Successfully Logged In..
["Opération anti-terroriste: Les perquisitions effectuées aux domiciles de certains mis en cause ont permis la saisie d'armes blanches, d'ouvrages et documents à contenu extrémiste et de guides de fabrication d'engins explosifs.\n#Maroc https://t.co/fxoAGZ5KuB", "🔴 Coup de filet anti-terroriste #BCIJ / #DGST/ #BNPJ: Une cinquantaine de personnes ont été interpellées, aujourd'hui, une vingtaine placées en garde à vue, suite à des opérations sécuritaires simultanées menées dans plusieurs villes du #Maroc.\n@DGSN_MAROC https://t.co/ZI40SYkDez", 'بعد توقيف المشتبه فيهم نفذت المصالح الأمنية وعناصر مراقبة التراب الوطني عمليات التفتيش بمنازل الموقوفين، والتي أدى بعضها إلى اكتشاف أسلحة بيضاء عبارة عن سيوف وفؤوس وهراوات ذات استخدام مشبوه. بالإضافة إلى كتب ومخطوطات تحتوي على خطب مع تحرض على "الجهاد"، وتتضمن أيضا وثائق تبين… https://t.co/3OewK8ULmW', '🔴 نفذت مصالح الامن  يومه الاربعاء عملية أمنية في صفوف عدد من المشتبهين في أوساط التيار المتطرف.\n\nوأطاحت هذه العملية الأمنية بحوالي 50 شخصا تمت إحالتهم على التحقيق تحت إشراف النيابة العامة المختصة.\n\n#المغرب https://t.co/EO8EMrywJM', "#Fête_du_Trône #Maroc\n\nÉnergies renouvelables: En quelques années, le #Maroc s'est positionné en leader mondial à la faveur d'une politique volontariste portée au plus haut sommet de l'Etat, des stratégies ciblées et des mégaprojets structurants. https://t.co/YeT8gI8cfE", "L'ancien gouverneur de Bank Al-Maghrib, Mohamed Sekkat est décédé, mercredi à Rabat, à l'âge de 81 ans.\n#Maroc https://t.co/7KaQoxDVer", '#المغرب #عيد_العرش \n\nتعزيز و تطوير البنيات الصحية في مختلف مناطق المملكة يعكس حرص صاحب الجلالة الملك محمد السادس على توفير خدمات طبية لمختلف فئات المجتمع https://t.co/cNY40Dqg5s', '#المغرب #عيد_العرش \n\nتعميم الحماية الاجتماعية ورش مجتمعي غير مسبوق. 
هذا المشروع الذي ترأس حفل إطلاق تنزيله صاحب الجلالة الملك محمد السادس في 14 أبريل 2020، يأتي ليثمن الإنجازات الهامة التي راكمها المغرب في هذا المجال https://t.co/MJ0KYJVxE3', '#عيد_العرش #المغرب\n\n استثمارات ضخمة، مشاريع مهيكلة ورؤية طموحة...الطاقات المتجددة، أساس استراتيجية المغرب الطاقية. ورش يحظى بالعناية السامية لصاحب الجلالة الملك محمد السادس. https://t.co/l6EMQTLZPl', '📌 📺 الطاقات المتجددة، تحدي القرن الحادي والعشرين.. ورش ملكي كبير أسس ل«\xa0ثورة طاقية» بالمملكة. \n\nبمناسبة عيد العرش، القناة الثانية تحكي لكم هذه الثورة، التي منحت المغرب الريادة في مجال الطاقات النظيفة، وذلك عبر وثائقي معزز بشهادات نادرة وصور أخاذة.\n\nموعدكم  مع «\xa0 المغرب، المملكة… https://t.co/PgS9Um1lap', "📺 Les énergies renouvelables, un enjeu du 21ème siècle. Au #Maroc, c'est un chantier de Règne. La Révolution énergétique du Royaume est en marche.\n\nA l'occasion de la #Fête_du_Trône, 2M vous raconte ce Maroc, leader des énergies propres, à travers un récit inédit fait de… https://t.co/80wwiU81YF", "Le nombre de clients du Groupe #Maroc Telecom a atteint près de 75 millions à fin juin 2023, en légère baisse de 0,5% par rapport à la même période de l'année précédente. https://t.co/7khuFJ55Lv"]
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x00000267CB2F3820>
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\asyncio\proactor_events.py", line 116, in __del__
    self.close()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\asyncio\proactor_events.py", line 108, in close
    self._loop.call_soon(self._call_connection_lost, None)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 751, in call_soon
    self._check_closed()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 515, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

Here is my code:

from tweeterpy import TweeterPy
from tweeterpy import util

twitter = TweeterPy()

twitter.login("***","****")
# OR generate the session manually with an auth token (if you're unable to log in)
# twitter.generate_session(auth_token="auth_token_here")
userid=twitter.get_user_id("2MInteractive")
# Set with_replies to True if you are also interested in tweet replies; if you only need tweets, leave it as it is. Feel free to set the total number of tweets you want to get; by default it gets all if left as None.
user_tweets = twitter.get_user_media(userid,from_date="2023-07-25",to_date="2023-07-26")
usernames = util.find_nested_key(user_tweets,"full_text")


print(usernames)

TypeError: Object of type timedelta is not JSON serializable

I got the error: TypeError: Object of type timedelta is not JSON serializable
I fixed the problem at line 126 of util.py:

api_limit_stats = {"total_limit": api_requests_limit, "remaining_requests_count": remaining_api_requests, "resets_after": remaining_time, "reset_after_datetime_object": str(remaining_time_datetime_object), "rate_limit_exhausted": limit_exhausted}

Please update the library.
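For reference, a minimal sketch of why the str() conversion fixes this: datetime and timedelta objects are not JSON serializable by default, so any of them left in the stats dict must be converted first (the variable names below just mirror the snippet above):

```python
import json
from datetime import timedelta

remaining_time = timedelta(minutes=12, seconds=30)

# json.dumps raises TypeError on a raw timedelta object.
try:
    json.dumps({"resets_after": remaining_time})
    serializable = True
except TypeError:
    serializable = False

# Converting to str first (the same idea as the patched line) works.
encoded = json.dumps({"resets_after": str(remaining_time)})
```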

Enhancements: List tweets and data models

Hey there, @iSarabjitDhiman,
Congratulations on picking this subject in these turbulent times, and also on the dynamic handling of the GraphQL endpoints, which is something I didn't see in other implementations.
I'd like to suggest implementing the ListTweets endpoint if you can find the right documentation for it (I've found some conflicting info).
Also, something useful for people looking for authenticated scraping would be data models such as those implemented by the snscrape.twitter module. This would make switching to TweeterPy easier for a lot of people.
Thanks!

Error on login with auth_token

Might be related to the issue solved in your last commit

API Updated Successfully.
Could not login account with auth_token, Twitter returned {'message': 'The following features cannot be null: creator_subscriptions_tweet_preview_api_enabled', 'extensions': {'name': 'BadRequestError', 'source': 'Client', 'code': 336, 'kind': 'Validation', 'tracing': {'trace_id': '5e01127c0bb131c2'}}, 'code': 336, 'kind': 'Validation', 'name': 'BadRequestError', 'source': 'Client', 'tracing': {'trace_id': '5e01127c0bb131c2'}}

I see that the key is present in util.py though

Rate Limit

Hi,

It looks like the rate limit argument is not working any more?

twitter = TweeterPy()
twitter.load_session(session_file_path=path)
twitter.get_user_id('',return_rate_limit=True)

TypeError: get_user_id() got an unexpected keyword argument 'return_rate_limit'

twitter.get_friends('',follower=True,return_rate_limit=True)
get_friends() got an unexpected keyword argument 'return_rate_limit'

Looking for get_user_tweets rate limit too.

Thanks,

Is there a way to set the logging level?

It feels like there is just too much log information.

e.g.

2023-10-22 20:29:56,401 [ERROR] :: Couldn't get the API file Url.
'NoneType' object has no attribute 'group'
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/tweeterpy/api_util.py", line 68, in _get_api_file_url
api_file_name = re.search(api_file_regex, page_source).group(1)
AttributeError: 'NoneType' object has no attribute 'group'
2023-10-22 20:29:59,104 [INFO] :: API Updated Successfully.
2023-10-22 20:29:59,105 [INFO] :: User is authenticated.
2023-10-22 20:29:59,254 [INFO] :: User is authenticated.

Is there a way to not display these?

For example: all logs, important logs only, or no logs.

Thanks,
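A sketch of how this can usually be controlled through Python's standard logging module, assuming the library logs under a logger named "tweeterpy" (that logger name is an assumption, not something confirmed by the docs):

```python
import logging

# Raise the level on the library's logger (the name "tweeterpy" is an
# assumption) so INFO messages like "User is authenticated." are hidden.
logging.getLogger("tweeterpy").setLevel(logging.WARNING)

# Heavier hammer: disable everything at INFO and below, process-wide.
logging.disable(logging.INFO)
```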

can't get twitter list from search method

I used TweeterPy to request a Twitter search:

from tweeterpy import TweeterPy
from tweeterpy import config

config.PROXY = {"http": "127.0.0.1:7891", "https": "127.0.0.1:7891"}
config.TIMEOUT = 10

twitter = TweeterPy()
data = twitter.search(search_query="hello world")
print(data)

The response data is an empty list:

'content-type'

'content-type'
{'data': [], 'cursor_endpoint': None, 'has_next_page': True}

twitter.get_list_tweets gives an error

twitter = TweeterPy()
twitter.load_session("login.pkl")
list_id = "xxxxxxxxxx"
twitter.get_list_tweets(list_id, end_cursor=None, total=50)
This code suddenly stopped working.

Has the List endpoint(= TWEETS_LIST_ENDPOINT) changed?

twitter.get_user_id('elonmusk')
twitter.get_user_media('elonmusk', end_cursor=None, total=2)
works fine.

thanks,

date period push

Hello @iSarabjitDhiman, would you please push the since/until (date range) support to the main branch? I need to use it without switching to the async-await branch.

Got code 88, Rate limit exceeded just by twitter = TweeterPy()

Hey pal, it's me again.

I've been encountering this since last Tuesday, when I try to initialize the crawler script, which had been running perfectly for about a month.

from tweeterpy import TweeterPy
twitter = TweeterPy()

it says

2024-03-19 02:41:31,223 [ERROR] :: Couldn't generate a new session.
'guest_token'
Traceback (most recent call last):
  File "/home/yitinglin/metacrm-twitter-crawler/tweeterpy/tweeterpy.py", line 159, in generate_session
    guest_token = make_request(
KeyError: 'guest_token'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/yitinglin/metacrm-twitter-crawler/tweeterpy/tweeterpy.py", line 29, in __init__
    self.generate_session()
  File "/home/yitinglin/metacrm-twitter-crawler/tweeterpy/tweeterpy.py", line 159, in generate_session
    guest_token = make_request(
KeyError: 'guest_token'

When I dug into the source code and printed the result of the make_request call for the guest token, I found that I got:

{'code': 88, 'message': 'Rate limit exceeded.'}

I'm wondering if you have encountered this issue before, on the guest token request.
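I don't know the root cause, but a hedged workaround is to retry the guest-token request with exponential backoff when the endpoint answers with code 88 instead of a token. The helper below and its names are hypothetical, not part of TweeterPy:

```python
import time

# Hypothetical retry helper: back off exponentially when the activation
# endpoint answers with code 88 (rate limit) instead of a guest token.
def request_with_backoff(request_fn, max_retries=3, base_delay=1.0):
    for attempt in range(max_retries):
        response = request_fn()
        if "guest_token" in response:
            return response["guest_token"]
        if response.get("code") == 88 and attempt < max_retries - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
            continue
        raise RuntimeError(f"Could not get guest token: {response}")

# Simulated responses: rate-limited once, then a token.
responses = iter([{"code": 88, "message": "Rate limit exceeded."},
                  {"guest_token": "abc123"}])
token = request_with_backoff(lambda: next(responses), base_delay=0.0)
```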

TypeError: Client.__init__() got an unexpected keyword argument 'follow_redirects'

The code gives the following error.
code:
twitter = TweeterPy()
error:

`File ~/miniconda3/envs/tweeterpy_env/lib/python3.11/site-packages/tweeterpy/tweeterpy.py:18, in TweeterPy.__init__(self)
     17 def __init__(self):
---> 18     self.generate_session()
     19     # update api endpoints
     20     ApiUpdater()

File ~/miniconda3/envs/tweeterpy_env/lib/python3.11/site-packages/tweeterpy/tweeterpy.py:97, in TweeterPy.generate_session(self, auth_token)
     95 ssl_verify = False if proxies else True
     96 timeout = config.TIMEOUT or 10
---> 97 self.session = httpx.Client(
     98     follow_redirects=True, timeout=timeout, proxies=proxies, verify=ssl_verify)
     99 self.session.headers.update(util.generate_headers())
    100 make_request(Path.BASE_URL, session=self.session)

TypeError: Client.__init__() got an unexpected keyword argument 'follow_redirects'`

Tweet Dates

Maybe I am doing something wrong, but the tweet dates seem to be all over the place when using guest tokens.

Is there a method to grab the latest tweets?

Thank you for your work.

from tweeterpy import TweeterPy
twitter = TweeterPy()
twitter.load_session("Twitter Saved Sessions/session.pkl")
tweets = twitter.get_user_tweets('AltCryptoGems')

for data in tweets['data']:
    print(data['content']['itemContent']['tweet_results']['result']['legacy']['created_at'])

API Updated Successfully.
Wed Nov 23 10:59:33 +0000 2022
Fri Oct 21 17:01:00 +0000 2022
Sun Oct 02 14:28:20 +0000 2022
Sun Apr 09 17:00:02 +0000 2023
Thu Aug 18 12:30:00 +0000 2022
Mon Aug 15 17:31:36 +0000 2022
Thu Sep 01 16:00:01 +0000 2022
Wed May 03 20:15:00 +0000 2023
Mon Jul 18 08:00:01 +0000 2022
Mon May 17 18:22:17 +0000 2021
Sun Jul 23 10:58:46 +0000 2023
Mon May 22 16:00:01 +0000 2023
Mon May 31 07:24:39 +0000 2021
Mon Apr 11 18:42:00 +0000 2022
Tue Feb 07 19:30:00 +0000 2023
Sun Aug 06 07:04:00 +0000 2023
Mon Jul 11 09:43:00 +0000 2022
Tue Sep 20 17:00:07 +0000 2022
Sat Feb 25 13:00:00 +0000 2023
Fri Nov 04 18:30:17 +0000 2022
Mon Nov 14 18:30:15 +0000 2022
Mon Oct 17 17:30:05 +0000 2022
Fri Oct 14 17:30:05 +0000 2022
Wed Mar 15 17:01:00 +0000 2023
Fri Nov 11 18:30:14 +0000 2022
Sun Mar 05 19:03:33 +0000 2023
Tue Jul 04 16:59:00 +0000 2023
Fri Nov 18 18:30:15 +0000 2022
Tue May 11 07:58:24 +0000 2021
Fri Sep 30 15:00:05 +0000 2022
Mon Oct 10 17:30:15 +0000 2022
Thu Sep 15 16:50:49 +0000 2022
Wed Jul 20 15:00:02 +0000 2022
Mon Nov 07 18:30:17 +0000 2022
Mon Oct 31 18:30:10 +0000 2022
Tue Nov 08 21:40:04 +0000 2022
Tue May 17 10:09:31 +0000 2022
Thu Oct 27 13:26:38 +0000 2022
Mon Sep 26 16:55:03 +0000 2022
Wed Jan 05 12:33:18 +0000 2022
Thu Jul 14 12:00:02 +0000 2022
Tue Jul 05 10:00:01 +0000 2022
Sun Jun 26 19:33:16 +0000 2022
Tue Apr 04 18:04:10 +0000 2023
Mon Jun 13 09:01:00 +0000 2022
Thu Jul 28 16:00:01 +0000 2022
Mon Jun 20 10:00:04 +0000 2022
Sat Aug 06 08:58:09 +0000 2022
Fri Dec 16 18:45:04 +0000 2022
Sat Jun 25 17:14:00 +0000 2022
Wed Oct 05 10:40:04 +0000 2022
Wed Sep 15 13:29:23 +0000 2021
Tue Aug 02 16:00:02 +0000 2022
Thu Sep 29 09:05:04 +0000 2022
Fri Dec 02 18:45:03 +0000 2022
Mon Feb 20 18:01:24 +0000 2023
Sun Oct 09 11:30:04 +0000 2022
Wed Sep 28 16:00:02 +0000 2022
Fri Oct 28 16:10:05 +0000 2022
Thu Jan 26 09:01:30 +0000 2023
Fri Aug 12 12:00:02 +0000 2022
Mon Feb 06 22:48:38 +0000 2023
Fri Jul 01 08:56:00 +0000 2022
Wed Oct 26 20:40:08 +0000 2022
Wed Oct 26 14:05:04 +0000 2022
Mon Apr 26 11:43:50 +0000 2021
Mon Jul 25 19:36:00 +0000 2022
Sun Oct 09 06:10:02 +0000 2022
Sat Oct 08 06:10:03 +0000 2022
Wed Oct 26 17:00:06 +0000 2022
Tue Sep 27 10:40:04 +0000 2022
Fri Jul 08 09:57:57 +0000 2022
Sat Nov 20 09:29:13 +0000 2021
Thu Jun 02 09:25:00 +0000 2022
Wed Oct 26 21:10:05 +0000 2022
Tue Jan 04 17:13:49 +0000 2022
Wed Jun 07 19:00:01 +0000 2023
Wed Sep 28 20:40:08 +0000 2022
Fri Sep 30 12:15:06 +0000 2022
Sun Oct 09 18:35:02 +0000 2022
Tue May 11 18:31:31 +0000 2021
Tue Aug 01 07:01:00 +0000 2023
Tue Dec 06 18:00:02 +0000 2022
Wed Oct 26 15:00:12 +0000 2022
Sat Oct 08 10:40:02 +0000 2022
Mon Jun 06 12:01:00 +0000 2022
Mon Aug 30 18:57:00 +0000 2021
Thu Dec 15 18:59:17 +0000 2022
Wed Sep 28 21:10:05 +0000 2022
Sun Jan 08 20:53:49 +0000 2023
Sat Nov 13 09:04:02 +0000 2021
Mon Oct 03 15:00:08 +0000 2022
Fri Dec 23 15:00:02 +0000 2022
Wed Oct 26 18:35:06 +0000 2022
Thu Jan 13 11:45:00 +0000 2022
Tue Nov 02 20:22:32 +0000 2021
Sat Dec 17 12:22:02 +0000 2022
Sun May 22 18:55:00 +0000 2022
Mon Dec 26 18:45:03 +0000 2022
Mon Aug 01 08:13:44 +0000 2022
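One workaround while the ordering is unclear: the legacy created_at strings can be parsed with datetime.strptime and sorted client-side, which at least yields a newest-first ordering regardless of what the timeline endpoint returns. A minimal sketch (the sample dates are taken from the output above):

```python
from datetime import datetime

# Twitter's legacy "created_at" format, e.g. "Wed Nov 23 10:59:33 +0000 2022".
TWITTER_DATE_FORMAT = "%a %b %d %H:%M:%S %z %Y"

dates = [
    "Wed Nov 23 10:59:33 +0000 2022",
    "Sun Apr 09 17:00:02 +0000 2023",
    "Mon May 17 18:22:17 +0000 2021",
]

# Parse and sort newest-first; the same idea applies to the full tweet
# dicts by sorting on the parsed created_at field.
parsed = sorted((datetime.strptime(d, TWITTER_DATE_FORMAT) for d in dates),
                reverse=True)
newest = parsed[0]
```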

Rate limit exception raising

I've been playing around with the library. It's great work, @iSarabjitDhiman; thanks for the effort you put into maintaining it.

I have a possible suggestion (as I mentioned in another issue, I'm not the best at Python).
I'm receiving the rate limiter error and I'd love to catch it. I've been reading the code and it seems the library raises it, but I'm not able to catch it on my end.

Is there a way we can add it? Or knowing if the rate limiter was reached?
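Until the library exposes a dedicated exception class, one hedged workaround is to catch the generic Exception and inspect its message, since as far as I can tell the library raises a plain Exception for this case. The wrapper below is hypothetical, not part of TweeterPy:

```python
# Hypothetical wrapper (not part of TweeterPy): the library appears to
# raise a plain Exception, so the message text is inspected instead.
def fetch_with_rate_limit_guard(fetch, *args, **kwargs):
    try:
        return fetch(*args, **kwargs)
    except Exception as error:
        if "rate limit" in str(error).lower():
            return None  # rate limit hit: back off, rotate session, etc.
        raise  # anything else is a real error

# Simulated call that fails the way the rate limiter does.
def fake_fetch():
    raise Exception("Rate limit exceeded.")

result = fetch_with_rate_limit_guard(fake_fetch)
```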

Error in demjson setup command: use_2to3 is invalid.

Hi, congrats on the constant improvements to this great module.
I often pip install into fresh venvs, so I noticed that the published version is not the latest (quite outdated at this point).
What's the best approach, other than manually updating to the latest commits?
Thanks

Issues with Parallelism - Issue with Error Handling

I tried everything from asyncio to multiprocessing to threading. This library only works when it's run the way shown in the docs.
The following snippet shows my attempt:

from tweeterpy import TweeterPy
import concurrent.futures

fp = open("acc_to_scrape.txt")
lines = fp.readlines()
fp.close()

thisTwitterAPI = TweeterPy()
thisTwitterAPI.login(OMIT)

def fetchFollowers(line):
    thisTwitterResponse = thisTwitterAPI.get_friends(line, follower=True)
    with open(f"./acc_followers/{line}.txt", "w") as fp:
        fp.write(str(thisTwitterResponse))  # the response is a dict, so convert it before writing

with concurrent.futures.ProcessPoolExecutor() as executor:
    executor.map(fetchFollowers, lines)

I keep getting: Unable to fetch data error.
Traceback:

Traceback (most recent call last):
  File "/home/OMIT/OMIT/.venv/lib/python3.10/site-packages/tweeterpy/request_util.py", line 32, in make_request
    return util.check_for_errors(response.json())
  File "/home/OMIT/OMIT/.venv/lib/python3.10/site-packages/tweeterpy/util.py", line 105, in check_for_errors
    raise Exception("Couldn't fetch data.")
Exception: Couldn't fetch data.
2024-02-08 02:08:37,180 [ERROR] :: Couldn't fetch data.
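For what it's worth, a pattern that tends to behave better than sharing one logged-in instance across workers is giving each worker its own client, since one session object is generally not safe to share across processes or threads. A sketch with a thread-local stand-in client; DummyClient below is a hypothetical placeholder for TweeterPy plus login:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# DummyClient is a stand-in for TweeterPy: in real code each worker would
# build and log in its own client instead of sharing one session object.
class DummyClient:
    def get_friends(self, username, follower=True):
        return f"followers-of-{username}"

_local = threading.local()

def get_client():
    # One client per thread; replace DummyClient() with TweeterPy() + login.
    if not hasattr(_local, "client"):
        _local.client = DummyClient()
    return _local.client

def fetch_followers(username):
    return get_client().get_friends(username, follower=True)

with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(fetch_followers, ["alice", "bob"]))
```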

TypeError: 'type' object is not subscriptable (Error when using older version of python. i.e. less than 3.9)

There's an error when running the app on versions of python less than 3.9.

I get this error on Python 3.8.10

The error originates from the tweeterpy module:

TypeError: 'type' object is not subscriptable

This error is common when trying to use a subscript notation (like list[dict]) in a Python version that is older than 3.9. The script or module you're using seems to be written for Python 3.9 or later, while your environment seems to be running an older version of Python.

Solutions:
Upgrade Python:
Upgrading your Python version to 3.9 or later would be the most straightforward solution.

sudo apt-get install python3.9

Modify the Code:

If upgrading Python is not an option, you would need to modify the tweeterpy module's code to replace the subscript notation with the typing module's generics:

Change this:

description_urls: list[dict] = field(default_factory=list)

To this:

from typing import List, Dict

description_urls: List[Dict] = field(default_factory=list)

These changes would need to be made wherever subscript notation is used within the tweeterpy module.

You could declare that Python 3.9 is required, or present an alternative method ¯\_(ツ)_/¯
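One more alternative, for what it's worth: a single __future__ import makes all annotations lazy strings, so the built-in generic syntax parses on Python 3.7+ without editing every annotation. A sketch with a hypothetical dataclass standing in for the one in tweeterpy:

```python
# The __future__ import turns annotations into lazy strings, so
# list[dict] in an annotation no longer raises on Python < 3.9.
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class UserDescription:  # hypothetical stand-in for the tweeterpy dataclass
    description_urls: list[dict] = field(default_factory=list)

user = UserDescription()
```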

The following features cannot be null: verified_phone_label_enabled, responsive_web_graphql_timeline_navigation_enabled, responsive_web_media_download_video_enabled, responsive_web_graphql_exclude_directive_enabled, responsive_web_graphql_skip_user_profile_image_extensions_enabled

I am using the async_await branch and got this message: "The following features cannot be null: verified_phone_label_enabled, responsive_web_graphql_timeline_navigation_enabled, responsive_web_media_download_video_enabled, responsive_web_graphql_exclude_directive_enabled, responsive_web_graphql_skip_user_profile_image_extensions_enabled", and it returns an empty list.

Task index - list index out of range

Python version: 3.10

Code:
from tweeterpy import TweeterPy
from tweeterpy import config

def main():
    config.TIMEOUT = 5
    config.PROXY = {'http': 'xx.xxx.xx.xx', 'https': 'xx.xxx.xx.xx'}
    twitter = TweeterPy()
    print(twitter.get_user_id('elonmusk'))
    print(twitter.get_user_info('elonmusk'))
    # print(twitter.get_user_data('elonmusk'))
    # print(twitter.get_user_tweets('elonmusk', total=200))
    # print(twitter.get_user_media('elonmusk', total=100))

if __name__ == "__main__":
    main()

Error:
Traceback (most recent call last):
  File "/home/machine/Desktop/twscrape/id.py", line 17, in <module>
    main()
  File "/home/machine/Desktop/twscrape/id.py", line 9, in main
    print(twitter.get_user_id('elonmusk'))
  File "/usr/local/lib/python3.10/site-packages/tweeterpy/tweeterpy.py", line 110, in wrapper
    self.login()
  File "/usr/local/lib/python3.10/site-packages/tweeterpy/tweeterpy.py", line 189, in login
    TaskHandler().login(username, password)
  File "/usr/local/lib/python3.10/site-packages/tweeterpy/login_util.py", line 91, in login
    task_id = [task_id for task_id in task_flow_mapper.keys() if task_id in tasks][0]
IndexError: list index out of range

Question:

I've just found a proxy and am trying your example code above. The Twitter username and password are entered correctly. What am I doing wrong to get this error?

Couldn't generate guest token

Hey, how are you?

I've been off for a while, working on my final engineering project.
I'm having this issue on my EC2 instance:

2023-10-28 19:35:52,979 [ERROR] :: Couldn't generate a new session.
'guest_token'
Traceback (most recent call last):
  File "/home/ec2-user/.local/lib/python3.9/site-packages/tweeterpy/tweeterpy.py", line 152, in generate_session
    guest_token = make_request(
KeyError: 'guest_token'
Traceback (most recent call last):
  File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/ec2-user/flask_api/phishbuster-cron/src/api/api.py", line 19, in <module>
    twitter_collector = TwitterDataCollector()
  File "/home/ec2-user/flask_api/phishbuster-cron/src/lib_tweeterpy/scrap_tweeterpy.py", line 12, in __init__
    self.twitter = TweeterPy()
  File "/home/ec2-user/.local/lib/python3.9/site-packages/tweeterpy/tweeterpy.py", line 29, in __init__
    self.generate_session()
  File "/home/ec2-user/.local/lib/python3.9/site-packages/tweeterpy/tweeterpy.py", line 152, in generate_session
    guest_token = make_request(
KeyError: 'guest_token'

I really don't know why it is happening, and I'm running out of options, to be honest.
Could you help me @iSarabjitDhiman ?
