
getstream / stream-chat-python


Stream Chat official Python API Client

Home Page: https://getstream.io/chat/

License: Other

Languages: Python 99.22%, Makefile 0.38%, JavaScript 0.40%
Topics: chat, python, chat-api, rest

stream-chat-python's Introduction

Official Python SDK for Stream Chat


Official Python API client for Stream Chat, a service for building chat applications.
Explore the docs »

Code Samples · Report Bug · Request Feature


💡 Major update in v4.0

The returned response objects are instances of the StreamResponse class. It inherits from dict, so it's fully backward compatible. Additionally, it provides other benefits, such as rate limit information (resp.rate_limit()), response headers (resp.headers()), and the status code (resp.status_code()).


📝 About Stream

You can sign up for a Stream account at our Get Started page.

You can use this library to access chat API endpoints server-side.

For client-side integrations (web and mobile), have a look at the JavaScript, iOS, and Android SDK libraries (docs).

⚙️ Installation

$ pip install stream-chat

✨ Getting started

💡 The library is almost 100% typed, so feel free to enable mypy for code that uses it. We will keep improving in this area.

from stream_chat import StreamChat

chat = StreamChat(api_key="STREAM_KEY", api_secret="STREAM_SECRET")

# add a user
chat.upsert_user({"id": "chuck", "name": "Chuck"})

# create a channel about kung-fu
channel = chat.channel("messaging", "kung-fu")
channel.create("chuck")

# add a first message to the channel
channel.send_message({"text": "AMA about kung-fu"}, "chuck")

# we also expose some response metadata through a custom dictionary
resp = chat.deactivate_user("bruce_lee")

print(type(resp)) # <class 'stream_chat.types.stream_response.StreamResponse'>
print(resp["user"]["id"]) # bruce_lee

rate_limit = resp.rate_limit()
print(f"{rate_limit.limit} / {rate_limit.remaining} / {rate_limit.reset}") # 60 / 59 /2022-01-06 12:35:00+00:00

headers = resp.headers()
print(headers) # { 'Content-Encoding': 'gzip', 'Content-Length': '33', ... }

status_code = resp.status_code()
print(status_code) # 200

Async

import asyncio
from stream_chat import StreamChatAsync


async def main():
    async with StreamChatAsync(api_key="STREAM_KEY", api_secret="STREAM_SECRET") as chat:
        # add a user
        await chat.upsert_user({"id": "chuck", "name": "Chuck"})

        # create a channel about kung-fu
        channel = chat.channel("messaging", "kung-fu")
        await channel.create("chuck")

        # add a first message to the channel
        await channel.send_message({"text": "AMA about kung-fu"}, "chuck")

        # we also expose some response metadata through a custom dictionary
        resp = await chat.deactivate_user("bruce_lee")
        print(type(resp)) # <class 'stream_chat.types.stream_response.StreamResponse'>
        print(resp["user"]["id"]) # bruce_lee

        rate_limit = resp.rate_limit()
        print(f"{rate_limit.limit} / {rate_limit.remaining} / {rate_limit.reset}") # 60 / 59 / 2022-01-06 12:35:00+00:00

        headers = resp.headers()
        print(headers) # { 'Content-Encoding': 'gzip', 'Content-Length': '33', ... }

        status_code = resp.status_code()
        print(status_code) # 200


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.close()

✍️ Contributing

We welcome code changes that improve this library or fix a problem. Please make sure to follow all best practices and add tests, if applicable, before submitting a pull request on GitHub. We are very happy to merge your code into the official repository. Make sure to sign our Contributor License Agreement (CLA) first. See our license file for more details.

Head over to CONTRIBUTING.md for some development tips.

🧑‍💻 We are hiring!

We've recently closed a $38 million Series B funding round and we keep actively growing. Our APIs are used by more than a billion end-users, and you'll have a chance to make a huge impact on the product within a team of the strongest engineers from all over the world.

Check out our current openings and apply via Stream's website.

stream-chat-python's People

Contributors

amirrpp, anatolyrugalev, bogdan-d, ferhatelmas, ffenix113, github-actions[bot], guerinoni, gumuz, jimmypettersson85, kanat, knyghty, marco-ulge, miagilepner, mircea-cosbuc, nekuromento, omelched, pangeran-bottor, peterdeme, pterk, ruggi, shreeharivaasishta, tbarbugli, thesyncim, totalimmersion, viktorapo808, vishalnarkhede, yaziine


stream-chat-python's Issues

createToken(userID) generates an invalid Token

When creating a new user token, the response contains extra invalid information.

chat_client = stream_chat.StreamChat(api_key=GET_STREAM_KEY, api_secret=GET_STREAM_SECRET)
token = chat_client.create_token(request.user.id)

Expected response:

"e*******************************************************************************-**********************************YA"

Actual response:

"b'e*******************************************************************************-**********************************YA'"

Use f strings

Since the minimum supported Python version is 3.6, f-strings can be leveraged everywhere.

gz#7901
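As an illustration, the path formatting used in calls like the one quoted in the update_channel_type issue further down this page could be rewritten as follows (self-contained example, not library code):

channel_type = "messaging"
# Before: str.format
url_old = "channeltypes/{}".format(channel_type)
# After: f-string (Python 3.6+)
url_new = f"channeltypes/{channel_type}"
assert url_old == url_new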

Any way to create channels with different configs without creating a new channel type?

I know how to create a new channel type and then create a new channel using the new channel type in order to customize the settings of that channel.

I've had a request from a product owner to add an administrative toggle that turns replies on and off in a given channel. Should I solve this by creating two different channel types and making each channel use the one with replies enabled or the one with replies disabled when instantiating the front end? Is there a better way?
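For what it's worth, a sketch of the two-channel-type approach, assuming create_channel_type accepts a settings dict (the replies flag mirrors the channel config shown in other issues on this page; the type names and toggle variable are hypothetical):

from stream_chat import StreamChat

chat = StreamChat(api_key="STREAM_KEY", api_secret="STREAM_SECRET")

# One channel type per state of the administrative toggle.
chat.create_channel_type({"name": "messaging-replies-on", "replies": True})
chat.create_channel_type({"name": "messaging-replies-off", "replies": False})

# Pick the type that matches the toggle when instantiating the channel.
replies_enabled = True  # hypothetical toggle value
channel_type = "messaging-replies-on" if replies_enabled else "messaging-replies-off"
channel = chat.channel(channel_type, "room-1")
channel.create("admin-user")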

Migration from 3.2.1 to 4.2.1

Is the Python SDK backward compatible, or would I have to change all the code? I'm currently using 3.2.1 but want to move to 4.2.1.

No way to get response headers because _parse_response strips them

https://getstream.io/chat/docs/rate_limits/ recommends using the X-RateLimit-Reset to find out "when the current limit will reset (unix timestamp)." Currently this is impossible with stream-chat-python because _parse_response strips the headers:
https://github.com/GetStream/stream-chat-python/blob/1.1.0/stream_chat/client.py#L43-L50

  1. Please provide a way to retrieve the headers that the docs specify we should respond to
  2. Please update the docs to show how to use stream-chat-python to honor rate limits
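For anyone landing here later: SDK versions 4.0 and up expose this through the StreamResponse wrapper described at the top of this page; a minimal sketch:

from stream_chat import StreamChat

chat = StreamChat(api_key="STREAM_KEY", api_secret="STREAM_SECRET")
resp = chat.query_users({})

# Parsed rate limit info; reset is when the current limit window ends.
rate_limit = resp.rate_limit()
print(rate_limit.limit, rate_limit.remaining, rate_limit.reset)

# The raw headers, including X-RateLimit-Reset, are also available.
print(resp.headers().get("X-RateLimit-Reset"))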

Use of f-strings breaks in Python < 3.6

The Readme states that this library supports Python 3.4+, but f-strings are used in this library, and they were only introduced in Python 3.6. When I try to import from client.py in Python 3.5, I get a syntax error.

Support for fetching events via rest api

Hey,
we are wondering how to make our integration with the API 100% reliable.

Scenario:

  • We receive webhooks from GetStream
  • The system is down for some time (or a buggy release means it is not reading incoming data)
  • We bring it back up after some time

How can we ensure that we've got all the events?
In Mailgun, for example, we also listen for events via webhooks, but additionally, every hour we fetch all the events using the REST API (there is an endpoint to fetch all events since a specified time).

This way, we can be sure that even if a webhook was not delivered, we will fetch all missed events from the system via a recurring task.

Proposed solution:

  • Allow fetching events via the REST API, with a filter option on event creation time

StreamChatAsync does not clean up resources according to aiohttp

Summary:
aiohttp generates error messages due to an unclosed connection.
Having taken a quick look at stream_chat/async_chat/client.py, calling session.close() on aexit does not satisfy aiohttp in my scenario.

Environment:
AWS Lambda
AWS Powertools v2.34.2 via AsyncBatchProcessor
stream-chat v4.12.1
Python v3.11

Relevant error messages:

"2024-03-25T22:26:43.353Z","[ERROR]	2024-03-25T22:26:43.353Z	2f7ab88d-c683-46f7-b6dc-616ab9f3241f	Unclosed client session"
"2024-03-25T22:26:43.353Z","connector: <aiohttp.connector.TCPConnector object at 0x7fdf32456f50>"
"2024-03-25T22:26:43.353Z","connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7fdf1a99a970>, 331.795931377)]']"
"2024-03-25T22:26:43.353Z","[ERROR]	2024-03-25T22:26:43.353Z	2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Unclosed connector"

Verbose log:

Date,Message
"2024-03-25T22:26:43.407Z","REPORT RequestId: 2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Duration: 499.26 ms	Billed Duration: 500 ms	Memory Size: 1024 MB	Max Memory Used: 395 MB"
"2024-03-25T22:26:43.407Z","END RequestId: 2f7ab88c-b683-46f7-b6dc-616ab9f3241f"
"2024-03-25T22:26:43.404Z","[WARNING]	2024-03-25T22:26:43.404Z	2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Executing <Task finished name='Task-6' coro=<AsyncBatchProcessor._async_process_record() done, defined at /var/task/aws_lambda_powertools/utilities/batch/base.py:634> result=('success', ...) created at /var/lang/lib/python3.11/asyncio/tasks.py:680> took 0.475 seconds"
"2024-03-25T22:26:43.353Z","super().__init__("
"2024-03-25T22:26:43.353Z","File ""/var/task/aiohttp/connector.py"", line 776, in __init__"
"2024-03-25T22:26:43.353Z","connector=aiohttp.TCPConnector(keepalive_timeout=59.0),"
"2024-03-25T22:26:43.353Z","File ""/var/task/stream_chat/async_chat/client.py"", line 68, in __init__"
"2024-03-25T22:26:43.353Z","client = StreamChatAsync("
"2024-03-25T22:26:43.353Z","File ""/var/task/helpers/streamchat_client.py"", line 18, in __init__"
"2024-03-25T22:26:43.353Z","stream_client = StreamChatClient(completionRequested.conversation.channelId)"
"2024-03-25T22:26:43.353Z","File ""/var/task/categorizer.py"", line 56, in async_record_handler"
"2024-03-25T22:26:43.353Z","result = await self.handler(record=data)"
"2024-03-25T22:26:43.353Z","File ""/var/task/aws_lambda_powertools/utilities/batch/base.py"", line 649, in _async_process_record"
"2024-03-25T22:26:43.353Z","self._context.run(self._callback, *self._args)"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/events.py"", line 84, in _run"
"2024-03-25T22:26:43.353Z","handle._run()"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/base_events.py"", line 1928, in _run_once"
"2024-03-25T22:26:43.353Z","self._run_once()"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/base_events.py"", line 608, in run_forever"
"2024-03-25T22:26:43.353Z","self.run_forever()"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/base_events.py"", line 641, in run_until_complete"
"2024-03-25T22:26:43.353Z","return loop.run_until_complete(task_instance)"
"2024-03-25T22:26:43.353Z","File ""/var/task/aws_lambda_powertools/utilities/batch/base.py"", line 126, in async_process"
"2024-03-25T22:26:43.353Z","processor.async_process()"
"2024-03-25T22:26:43.353Z","File ""/var/task/aws_lambda_powertools/utilities/batch/decorators.py"", line 251, in async_process_partial_response"
"2024-03-25T22:26:43.353Z","return async_process_partial_response("
"2024-03-25T22:26:43.353Z","File ""/var/task/categorizer.py"", line 103, in handler"
"2024-03-25T22:26:43.353Z","response = request_handler(event, lambda_context)"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/site-packages/awslambdaric/bootstrap.py"", line 188, in handle_event_request"
"2024-03-25T22:26:43.353Z","handle_event_request("
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/site-packages/awslambdaric/bootstrap.py"", line 499, in run"
"2024-03-25T22:26:43.353Z","bootstrap.run(app_root, handler, lambda_runtime_api_addr)"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/site-packages/awslambdaric/__main__.py"", line 21, in main"
"2024-03-25T22:26:43.353Z","awslambdaricmain.main([os.environ[""LAMBDA_TASK_ROOT""], os.environ[""_HANDLER""]])"
"2024-03-25T22:26:43.353Z","File ""/var/runtime/bootstrap.py"", line 60, in main"
"2024-03-25T22:26:43.353Z","main()"
"2024-03-25T22:26:43.353Z","File ""/var/runtime/bootstrap.py"", line 63, in <module>"
"2024-03-25T22:26:43.353Z","source_traceback: Object created at (most recent call last):"
"2024-03-25T22:26:43.353Z","connector: <aiohttp.connector.TCPConnector object at 0x7fdf32456f50>"
"2024-03-25T22:26:43.353Z","connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7fdf1a99a970>, 331.795931377)]']"
"2024-03-25T22:26:43.353Z","[ERROR]	2024-03-25T22:26:43.353Z	2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Unclosed connector"
"2024-03-25T22:26:43.353Z","self.session = aiohttp.ClientSession("
"2024-03-25T22:26:43.353Z","File ""/var/task/stream_chat/async_chat/client.py"", line 66, in __init__"
"2024-03-25T22:26:43.353Z","client = StreamChatAsync("
"2024-03-25T22:26:43.353Z","File ""/var/task/helpers/streamchat_client.py"", line 18, in __init__"
"2024-03-25T22:26:43.353Z","stream_client = StreamChatClient(completionRequested.conversation.channelId)"
"2024-03-25T22:26:43.353Z","File ""/var/task/categorizer.py"", line 56, in async_record_handler"
"2024-03-25T22:26:43.353Z","result = await self.handler(record=data)"
"2024-03-25T22:26:43.353Z","File ""/var/task/aws_lambda_powertools/utilities/batch/base.py"", line 649, in _async_process_record"
"2024-03-25T22:26:43.353Z","self._context.run(self._callback, *self._args)"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/events.py"", line 84, in _run"
"2024-03-25T22:26:43.353Z","handle._run()"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/base_events.py"", line 1928, in _run_once"
"2024-03-25T22:26:43.353Z","self._run_once()"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/base_events.py"", line 608, in run_forever"
"2024-03-25T22:26:43.353Z","self.run_forever()"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/asyncio/base_events.py"", line 641, in run_until_complete"
"2024-03-25T22:26:43.353Z","return loop.run_until_complete(task_instance)"
"2024-03-25T22:26:43.353Z","File ""/var/task/aws_lambda_powertools/utilities/batch/base.py"", line 126, in async_process"
"2024-03-25T22:26:43.353Z","processor.async_process()"
"2024-03-25T22:26:43.353Z","File ""/var/task/aws_lambda_powertools/utilities/batch/decorators.py"", line 251, in async_process_partial_response"
"2024-03-25T22:26:43.353Z","return async_process_partial_response("
"2024-03-25T22:26:43.353Z","File ""/var/task/categorizer.py"", line 103, in handler"
"2024-03-25T22:26:43.353Z","response = request_handler(event, lambda_context)"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/site-packages/awslambdaric/bootstrap.py"", line 188, in handle_event_request"
"2024-03-25T22:26:43.353Z","handle_event_request("
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/site-packages/awslambdaric/bootstrap.py"", line 499, in run"
"2024-03-25T22:26:43.353Z","bootstrap.run(app_root, handler, lambda_runtime_api_addr)"
"2024-03-25T22:26:43.353Z","File ""/var/lang/lib/python3.11/site-packages/awslambdaric/__main__.py"", line 21, in main"
"2024-03-25T22:26:43.353Z","awslambdaricmain.main([os.environ[""LAMBDA_TASK_ROOT""], os.environ[""_HANDLER""]])"
"2024-03-25T22:26:43.353Z","File ""/var/runtime/bootstrap.py"", line 60, in main"
"2024-03-25T22:26:43.353Z","main()"
"2024-03-25T22:26:43.353Z","File ""/var/runtime/bootstrap.py"", line 63, in <module>"
"2024-03-25T22:26:43.353Z","source_traceback: Object created at (most recent call last):"
"2024-03-25T22:26:43.353Z","client_session: <aiohttp.client.ClientSession object at 0x7fdf1c022190>"
"2024-03-25T22:26:43.353Z","[ERROR]	2024-03-25T22:26:43.353Z	2f7ab88d-c683-46f7-b6dc-616ab9f3241f	Unclosed client session"
"2024-03-25T22:26:42.930Z","[WARNING]	2024-03-25T22:26:42.930Z	2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Could not send typing start event to stream chat - StreamChat error code 16: SendEvent failed with error: ""Can't find channel with id"""""
"2024-03-25T22:26:42.911Z","[INFO]	2024-03-25T22:26:42.911Z	2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Sending typing start event to stream chat channel"
"2024-03-25T22:26:42.908Z","[INFO]	2024-03-25T22:26:42.908Z	2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Received 1 records."
"2024-03-25T22:26:42.908Z","[INFO]	2024-03-25T22:26:42.908Z	2f7ab88c-b683-46f7-b6dc-616ab9f3241f	Starting handler..."
"2024-03-25T22:26:42.907Z","START RequestId: 2f7ab88c-b683-46f7-b6dc-616ab9f3241f Version: 75"

Changelog?

I want to upgrade from 0.4.3 to 0.5.0, but in order to do that, I need to see a list of impacted areas so I can pass it on to our QA team. I see a changelog file in the root of the repo, but it is blank. Will you start maintaining the changelog file?

I tried downloading the source to do a diff, but I noticed that 0.4.3 is not tagged in the repository. Can you share which commit you used for 0.4.3?

Create or Update Chat Channel

I am able to send a message if the channel exists, but how do I check whether the channel exists and create it if it doesn't?
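A minimal sketch, assuming channel creation is idempotent as in the README example at the top of this page (calling create on an existing channel simply returns its state instead of failing):

from stream_chat import StreamChat

chat = StreamChat(api_key="STREAM_KEY", api_secret="STREAM_SECRET")

channel = chat.channel("messaging", "kung-fu")
channel.create("chuck")  # creates the channel if it does not exist yet
channel.send_message({"text": "AMA about kung-fu"}, "chuck")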

Issue when querying users

Hi,

Did anything change in the API? I suddenly started receiving something in the x-ratelimit-limit header that is not parsable as an int, which causes a parsing error.

Code to reproduce the error:

from stream_chat import StreamChat

chat = StreamChat(api_key="My API key", api_secret="My API secret")

chat.query_users({})

conda?

I'm trying to install using conda but the package is not found on anaconda.org. Any tips on how to get it to work?

1.3.0 breaks verify_webhook (python 3)

Since e51c4fa I receive an error in verify_webhook:

TypeError: Unicode-objects must be encoded before hashing

This seems to be related to the removal of six.b.

I'm using Python 3.7, but I have no idea what I'm supposed to do. This seems to be a breaking change (in a minor release), and the documentation doesn't mention anything about encoding the data prior to calling verify_webhook:
https://getstream.io/chat/docs/webhooks/?language=python#verify-events-via-x-signature-and-x-api-key

What am I missing here?
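A minimal workaround sketch, assuming verify_webhook(request_body, x_signature) now expects the raw body as bytes (the request_body and x_signature values below are hypothetical stand-ins for whatever your web framework provides):

from stream_chat import StreamChat

chat = StreamChat(api_key="STREAM_KEY", api_secret="STREAM_SECRET")

request_body = '{"type": "message.new"}'  # hypothetical raw webhook payload
x_signature = "value-of-the-X-Signature-header"  # hypothetical header value

# Encode to bytes before hashing, since the SDK no longer does it implicitly.
if isinstance(request_body, str):
    request_body = request_body.encode("utf-8")
valid = chat.verify_webhook(request_body, x_signature)
print(valid)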

$nin query operator works as $in.

Hi!

Looks like right now the $nin operator works like $in. Here is an example with the Python SDK:

channels = chat_client.query_channels({'is_direct': True, 'admin': {'$in': ['3']}},)

Where channels are:

{'channels': [
  {'channel': {'id': '40',
    'type': 'messaging',
    'cid': 'messaging:40',
    ...
    'admin': ['4', '3'],
    'group': '37'}
  },
  {'channel': {'id': '63',
    'type': 'messaging',
    'cid': 'messaging:63',
    ...
    'admin': ['3', '14'],
    'group': '56'},
  },
  {'channel': {'id': '72',
    'type': 'messaging',
    'cid': 'messaging:72',
    ...
    'admin': ['3', '17'],
    'group': '64',
    },
  },
  {'channel': {'id': '53',
    'type': 'messaging',
    'cid': 'messaging:53',
    ...
    'admin': ['18', '3'],
    'group': '46'
    },
  },
  {'channel': {'id': '9',
    'type': 'messaging',
    'cid': 'messaging:9',
    ...
    'admin': ['3', '9'],
    'archived': ['3'],
    'group': '9'
    }
  }],
 'duration': '6.86ms',
}

Now I run the $nin query, trying to filter out all archived channels for user '3':

channels = chat_client.query_channels({'is_direct': True, 'admin': {'$in': ['3']}, 'archived': {'$nin': ['3']}}) 

I expect 4 channels here, but instead I get:

{'channels': [{'channel': {'id': '9',
    'type': 'messaging',
    'cid': 'messaging:9',
    ...
    'admin': ['3', '9'],
    'archived': ['3'],
    'group': '9'}}],
 'duration': '8.08ms'
}

In fact, we get only the channel that has user '3' in the archived array.

Creating a channel with a new user doesn't set a created_at time on the new user.

If I create a new channel for a non-registered user id, then that user is created without a proper created_at value.

> client.channel('livestream', 'test').create('new-user')

{'channel': {'id': 'test',
  'type': 'livestream',
  'cid': 'livestream:test',
  'created_at': '2020-08-19T22:33:42.163578Z',
  'updated_at': '2020-08-19T22:33:42.163579Z',
  'created_by': {'id': 'new-user',
   'role': 'user',
   'banned': False,
   'online': False},
  'frozen': False,
  'config': {'created_at': '2020-08-06T15:35:24.475131Z',
   'updated_at': '2020-08-06T15:35:24.47511Z',
   'name': 'livestream',
   'typing_events': False,
   'read_events': False,
   'connect_events': False,
   'search': False,
   'reactions': False,
   'replies': False,
   'mutes': False,
   'uploads': False,
   'url_enrichment': False,
   'message_retention': 'infinite',
   'max_message_length': 500,
   'automod': 'simple',
   'automod_behavior': 'flag',
   'commands': None}},
 'messages': [],
 'members': [],
 'membership': None,
 'duration': '12.67ms'}


> client.update_users([{'id': 'new-user', 'name': 'Bob'}])

{'users': {'new-user': {'id': 'new-user',
   'role': 'user',
   'created_at': '0001-01-01T00:00:00Z',
   'updated_at': '2020-08-19T22:34:23.22199Z',
   'banned': False,
   'online': False,
   'name': 'Bob'}},
 'duration': '6.85ms'}

Note, the channel response doesn't even have a created_at value, and the update_users response has a nonsense created_at value.

If I do the update_user call first and then create the channel, then the user will have a proper created_at value in both the user record of the update_users call and in the created_by field of the channel response.

This causes an issue loading channels in the iOS client library, since it gets a decoding error on the missing created_at key.

gz#5431

Upgraded from stream-chat 4.1.0 to 4.2.1, breaking change on creating

channel = Channel(
    get_stream_client,
    channel_type=json_obj['type'],
    channel_id=json_obj['id'] if 'id' in json_obj.keys() else None,
    custom_data=json_obj['custom_data'],
)
channel = channel.query()
Error message:
'Channel' object has no attribute 'query'

Is there something I am missing?

Thank you 👍

Bug: client.search errors out for nonexistent channel ids

When I use the client.search function with channel ids that don't exist, instead of returning an empty list it gives me the following error:

Traceback (most recent call last):
  File "senti_abuse_analysis.py", line 177, in <module>
    response = client.search({"id": {"$in": channel_ids}}, {"updated_at":{"$gt": previous_max_date}}, limit=400)
  File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/stream_chat/client.py", line 358, in search
    return self.get("search", params={"payload": json.dumps(params)})
  File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/stream_chat/client.py", line 90, in get
    return self._make_request(self.session.get, relative_url, params, None)
  File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/stream_chat/client.py", line 77, in _make_request
    return self._parse_response(response)
  File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/stream_chat/client.py", line 46, in _parse_response
    raise StreamAPIException(response.text, response.status_code)
stream_chat.base.exceptions.StreamAPIException: StreamChat error code 4: Search failed with error: "There are no searchable channels""

Send image fails for bot detection

An example URL that fails: https://api.twilio.com/2010-04-01/Accounts/AC3e136e1a00279f4dadcb10a9f1a1e8a3/Messages/MM547edf6f4846a30231c3033fa20f8419/Media/ME498020f8fe0b0ba2ff83ac99e4782e02

The redirected URL works: https://s3-external-1.amazonaws.com/media.twiliocdn.com/AC3e136e1a00279f4dadcb10a9f1a1e8a3/6cc4e8fb61288133c70db0b083114791

Attachment Type - Custom Value

We are using both the Python and Flutter SDKs. In the Flutter SDK, while uploading files, we can set a custom value ('file' in our case) for the attachment's type field. However, when we try to do the same thing using the Python SDK, the type field is converted to a MIME type such as application/json, even if we set it to 'file' in the send_file function.

channel = streamchat.channel(channel_type, channel_id)
channel.send_file(file_url, file_name, {"id": user_id}, "file") # type is set as file

Becomes

[Screenshot omitted: the attachment's type field shows a MIME type instead of 'file']

Is it possible to set a custom value for the type field?

gz#10992
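A possible workaround sketch, continuing from the snippet above (hypothetical, not confirmed by the maintainers): upload the file first, then attach it to a message whose attachment type is set explicitly. The asset_url key follows the attachment structure used by the other SDKs, and the shape of the send_file response should be double-checked.

upload = channel.send_file(file_url, file_name, {"id": user_id})
channel.send_message(
    {
        "text": "Here is the file",
        "attachments": [
            # "file" is the custom type we want; the uploaded URL is assumed to
            # come back from send_file under the "file" key.
            {"type": "file", "asset_url": upload["file"]},
        ],
    },
    user_id,
)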

`update_channel_type` not working as expected

It's possible to invoke the endpoint that update_channel_type uses directly; self.put("channeltypes/{}".format(channel_type), **settings) works fine. But calling update_channel_type with any arguments to update the channel type results in errors:

[Error screenshots omitted]

In addition, it appears that one is unable to remove specific commands from the channel type:
[Screenshot omitted]

I'm using stream-chat==0.4.3.

Add a new way to query all users

Problem explanation

Zendesk ticket: 43044
query_channels can run into a possible infinite loop because it always returns results even past offset 1000.
E.g. asking for offset 1000000 will still return users even when only 2000 users exist in the system.

https://support.getstream.io/hc/en-us/articles/4414446580119-Querying-for-users-channels-past-the-offset-1000-limit-Chat

I would suggest that the library implement an easy way to query all users with offsets greater than 1000, maybe as a Python iterable.

Steps to reproduce

I'm well aware this code is not very clean, but it does show the issue. Since query_channels always returns items in the array, you don't know when to stop.

channels = []
offset = 0
limit = max_limit_to_fetch_channels
channels_fetched_count = limit
while channels_fetched_count > 0 or channels_fetched_count == limit:
    channels_fetched = query_channels(
        filter_conditions=......,
        sort={"last_message_at": -1},
        message_limit=1,
        channel_limit=limit,
        offset=offset,
    )
    channels.extend(channels_fetched)
    offset += limit
    channels_fetched_count = len(channels_fetched)

Environment info

Library version: stream-chat==4.8.0
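A sketch of the suggested iterable, stopping as soon as a page comes back smaller than the requested limit and never going past the documented 1000-offset cap (assumes query_users accepts offset and limit as options, as in current SDK versions; the helper name is hypothetical):

def iter_users(chat, filter_conditions, limit=100, max_offset=1000):
    # Hypothetical helper: page through query_users until a short page is returned.
    offset = 0
    while offset <= max_offset:
        page = chat.query_users(filter_conditions, offset=offset, limit=limit)["users"]
        yield from page
        if len(page) < limit:
            break
        offset += limit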

Any way to "set the user" without calling "update users" endpoint?

The docs specify that to use the API, one needs to set the user in the client: https://getstream.io/chat/docs/init_and_users/?language=js#setting-the-user

Note how we are waiting for the setUser API call to be completed before moving forward. You should always make sure to have the user set before making any more calls. All SDKs make this very easy and wait or queue requests until then.

I don't see any method that lets one set the user without hitting the Update Users API endpoint, which suffers from draconian rate limiting at 200ms / request.

Is there a way to call setUser in the Python SDK at all, like in the other SDKs (a Python example is NOT in the docs)?
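For context, the server-side SDK authenticates every request with the API secret, so there is no setUser equivalent here; user tokens for the client-side SDKs are minted with create_token. A minimal sketch (the user id is hypothetical):

from stream_chat import StreamChat

chat = StreamChat(api_key="STREAM_KEY", api_secret="STREAM_SECRET")
# Hand the resulting token to the web/mobile client, which is where
# setUser/connectUser is actually called.
token = chat.create_token("some-user-id")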

Failed to add members to channel

Recently we found that we cannot add more than 100 users to a channel. No exceptions are raised. We tried adding them one by one instead of batching by 99 users, but they are still not added to the channel. Any suggestions?

gz#5696
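A minimal batching sketch, assuming channel.add_members takes a list of user ids (see the channel.py link in the next issue) and that each call must stay below the 100-member limit; the helper name is hypothetical:

def add_members_in_batches(channel, user_ids, batch_size=99):
    # Hypothetical helper: add members in chunks below the per-call limit.
    for i in range(0, len(user_ids), batch_size):
        channel.add_members(user_ids[i:i + batch_size])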

Support `Message` parameter for `channel.add_members`

I'd like to create a welcome message for newly joined users using channel.add_members.

I found that other client SDKs, like the NodeJS one, have this awesome feature (link).

However, the Python SDK doesn't support this feature.
https://github.com/GetStream/stream-chat-python/blob/master/stream_chat/channel.py#L155

I checked that the GetStream backend supports this feature with the code below.

ch.client.post(
    ch.url, data={'add_members': ['159815'], "message": {'text': 'welcome', 'user_id': '16', 'custom_attribute': 'working?'}}
)

I hope the message parameter can be exposed in the official interface of channel.add_members.

Thanks.

add throttling option to campaigns API?

Per your docs, the Campaigns API sends messages at a rate of about 60,000 per minute (https://getstream.io/chat/docs/python/campaign_api/?language=python). Would it be possible to add a throttling option so that these messages get sent more slowly, at a rate specified by the user?

The reason is that we have a server that handles the webhooks for new message events. Normally that server handles at most a few hundred events per minute. If that number were to jump to 60000 / minute without any ramp-up time, the server would undoubtedly crash.

client.on() method

For example, the Dart SDK has an on() method that allows you to work with events. This is very convenient because I don't need to be tied to a channel. Is there any alternative in the Python package?

Dart example:

client.on().listen((event) => {
	if (event.totalUnreadCount != null) {
		print('unread messages count is now: ${event.totalUnreadCount}');
	}
 
	if (event.unreadChannels != null) {
		print('unread channels count is now: ${event.unreadChannels}');
	}
});

Sorting message results from a channel query

I'm attempting to query the messages of a channel, but it is unclear from the documentation how to sort those message results in the Python SDK.

Here's some hopefully illustrative code:

channel_instance = Channel(stream_client, 'type', 'id')
response = channel_instance.query(messages={'limit': 5})

I would expect that I could supply 'sort': 'created_at' to the dict supplied to messages. For example:

# This doesn't work
channel_instance = Channel(stream_client, 'type', 'id')
response = channel_instance.query(messages={'limit': 5, 'sort': 'created_at'})

Is there support for sorting the paginated messages returned from channel queries? If so, is there support for ascending and descending sorting?

Is there any way to find out if a user has a particular role?

What I'd like to do is query the role of a particular user on a particular channel, and add them as a member or moderator if they don't already have a role. I've tried a number of different things, and can't seem to get any query that only returns the member status for the supplied user ids.

channel.query(members={"limit": 1}, user_id='192_168_33_11-HBXUser_id_5x')
>>> returns a member id that is NOT 192_168_33_11-HBXUser_id_5x

channel.query(members={"limit": 1}, filter_conditions={ "members": {"$in": "192_168_33_11-HBXUser_id_5x"} })
>>> returns a member id that is NOT 192_168_33_11-HBXUser_id_5x

channel.query(**{"id": {"$eq": channel.id}, "members": {"$in": ['192_168_33_11-HBXUser_id_33x'], "limit": 1} , "message_limit": 0})
>>> returns a member that is NOT 192_168_33_11-HBXUser_id_33x

The documentation is extremely unclear about this, especially as it pertains to Python. In addition, it seems like there's no test coverage for channel.query, only for client.query_channels.

I'm using v1.0.1.

1.) Is this currently possible? If so, how? (A possible approach is sketched after this list.)
2.) Will you add test coverage for channel.query?
3.) Are there any limits to the number of members in a given channel you can query?
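Regarding (1): newer SDK versions (not v1.0.1) expose channel.query_members, which can filter on a single user id. A hedged sketch, continuing with the channel object from the queries above; the exact return shape may differ between versions:

resp = channel.query_members({"id": "192_168_33_11-HBXUser_id_5x"})
# Depending on the SDK version this returns either the members list directly
# or a response dict containing it.
members = resp["members"] if isinstance(resp, dict) else resp
if members:
    print(members[0].get("channel_role") or members[0].get("role"))
else:
    print("user has no role on this channel yet")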

add disable_webhooks option to campaigns API?

My company receives and processes GetStream webhooks for a variety of reasons, and normally the system works well. However, when we use the campaigns feature to send a message to our entire user base (~100K users), the server that processes webhooks gets hit with a massive traffic spike because so many messages are sent within a short period of time. This causes a bunch of load-related problems: database connection issues, timeouts or rate limit errors on other third-party services, etc.

It would be great if we could disable webhooks for campaigns, possibly by adding a disable_webhooks option to the api call that triggers the campaign.

Thanks much :)

Response code of 500 for "no more pages" seems wrong

I'm using version 0.5.0. When querying messages or members on a channel, you can supply offset and limit parameters to implement pagination. If you happen to have only 10 users signed up on your channel, and you use an offset of 50 and a limit of 50, you get back HTTP status code 500, which seems incorrect, as it's just the second page with a page size of 50. It seems like status code 404, or status code 200 with an empty array, would be more idiomatically correct than "internal server error."
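Until the response code changes, a defensive sketch is to stop paginating when a page comes back empty and to treat StreamAPIException as end-of-data. Whether the members sub-query accepts an offset is assumed from the report, and the exception import path is taken from the search traceback elsewhere on this page:

from stream_chat import StreamChat
from stream_chat.base.exceptions import StreamAPIException

chat = StreamChat(api_key="STREAM_KEY", api_secret="STREAM_SECRET")
channel = chat.channel("livestream", "test")

members, offset, limit = [], 0, 50
while True:
    try:
        page = channel.query(members={"offset": offset, "limit": limit})["members"]
    except StreamAPIException:
        break  # defensive handling of the 500 described above
    if not page:
        break
    members.extend(page)
    offset += limit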
