randsleadershipslack / destalinator Goto Github PK
Code for managing Cleanup of Stale Channels
License: Apache License 2.0
I'm getting an encoding error in Python 3 when taking the app out of debug mode and running warner.py:
Traceback (most recent call last):
File "/Users/matthew.wenger/destalinator/warner.py", line 18, in <module>
Warner().warn(force_warn=force_warn)
File "/Users/matthew.wenger/destalinator/warner.py", line 11, in warn
self.ds.warn_all(self.config.warn_threshold, force_warn)
File "/Users/matthew.wenger/destalinator/destalinator.py", line 229, in warn_all
if self.warn(channel, days, force_warn):
File "/Users/matthew.wenger/destalinator/destalinator.py", line 214, in warn
self.post_marked_up_message(channel_name, self.warning_text, message_type='channel_warning')
File "/Users/matthew.wenger/destalinator/destalinator.py", line 108, in post_marked_up_message
self.slacker.post_message(channel_name, self.add_slack_channel_markup(message), **kwargs)
File "/Users/matthew.wenger/destalinator/slacker.py", line 325, in post_message
post_data['attachments'] = json.dumps([{'fallback': message_type}], encoding='utf-8')
File "/usr/local/Cellar/[email protected]/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
TypeError: __init__() got an unexpected keyword argument 'encoding'
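The root cause is that `json.dumps()` accepted an `encoding` keyword only in Python 2, where `str` was a byte string; Python 3 removed it. A minimal sketch of the fix, using the failing call from the traceback (the `message_type` value is taken from the traceback; the variable setup here is illustrative):

```python
import json

# In Python 3, json.dumps() no longer accepts an `encoding` keyword.
# The failing call in slacker.py:
#   post_data['attachments'] = json.dumps([{'fallback': message_type}], encoding='utf-8')
# can simply drop the argument -- json.dumps() already returns a str:
message_type = 'channel_warning'  # example value from the traceback
attachments = json.dumps([{'fallback': message_type}])
print(attachments)  # '[{"fallback": "channel_warning"}]'
```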
RuntimeError: Attempted to get channel info for sanfrancisco, but return was {u'error': u'ratelimited', u'ok': False}
File "scheduler.py", line 36, in destalinate_job
warner.Warner().warn()
File "/app/warner.py", line 11, in warn
self.ds.warn_all(self.config.warn_threshold, force_warn)
File "/app/destalinator.py", line 234, in warn_all
if self.stale(channel, days):
File "/app/destalinator.py", line 118, in stale
if self.slacker.channel_has_only_restricted_members(channel_name):
File "/app/slacker.py", line 169, in channel_has_only_restricted_members
mids = set(self.get_channel_members_ids(channel_name))
File "/app/slacker.py", line 161, in get_channel_members_ids
return self.get_channel_info(channel_name)['members']
File "/app/slacker.py", line 192, in get_channel_info
raise RuntimeError(m)
After being warned and no subsequent activity for thirty days, we saw some channels get warned again, instead of archived.
When collecting activity in the last N days, we do this twice per channel, once for the warner and once for the archiver. Perhaps more importantly, we collect everything, then filter it and see if it matches certain conditions. If we invert control here, and pass stopping conditions into the "activity in the last N days" method, we should get a substantial performance increase to the point where it might be okay to do it twice per channel. Busy channels should stop sooner rather than later... right now it spends more work on busy channels.
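The inversion of control described above could look something like the sketch below: instead of collecting all messages and filtering afterwards, the caller passes the stopping condition in, so busy channels short-circuit on the first relevant message. The function and variable names are hypothetical, not destalinator's actual API:

```python
from typing import Callable, Iterable


def has_activity(messages: Iterable[dict],
                 is_relevant: Callable[[dict], bool]) -> bool:
    """Return True as soon as one relevant message is seen.

    `messages` would be a lazy generator over paginated channel
    history; `any()` stops consuming it at the first hit, so the
    cost is lowest exactly where it used to be highest.
    """
    return any(is_relevant(m) for m in messages)


# Hypothetical usage with an in-memory stand-in for channel history:
messages = ({'subtype': s} for s in ['bot_message', 'me_message', 'plain'])
relevant = lambda m: m.get('subtype') == 'me_message'
print(has_activity(messages, relevant))  # True, after reading only 2 messages
```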
The 💾 icon has two aliases:
:save :
:floppy_disk :
(spaces deliberately added to avoid rendering emoji)
If the first person to wish to save a message uses the floppy form, others will also click on this and the message will never be picked up and put into #zmeta_flagged.
You can see this effect here:
https://rands-leadership.slack.com/archives/help-and-advice/p1471971278002815
I'm seeing an issue where a channel that has already been 30-day-warned gets 30-day-warned again the next time the scheduler runs. Although it's annoying, it does seem to be an effective way to get people to leave a channel.
This is mostly a note to myself -- as I've prepped destalinator for use in my company's Slack team, I think there's room to mention warning.txt in the readme as something that people might want to customize.
Hi, I'm building something similar and stumbled upon destalinator on the internet. I'm currently having some issues with getting the last_read value in channels.info's JSON response for channels I'm not a member of. Wonder if you have some experience dealing with that. Any insights you could share? Thanks!
We've standardized on tests.fixtures and tests.mocks to clean up test data and create reliable states. We just haven't yet updated test_destalinator to use that data instead of its own ad-hoc data.
Using a config like ignore_channel_patterns through the environment variable DESTALINATOR_IGNORE_CHANNEL_PATTERNS=^auto- causes everything to be ignored. The string pattern is not cast into a list properly, which causes unexpected behavior. If there's a better env var syntax that works out of the box we should update the docs; otherwise we'll need to code in a solution or drop env var support for list properties.
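One plausible fix is to split list-valued env vars on commas before use; a bare string like "^auto-" would otherwise be iterated character by character, which is consistent with "everything is ignored". A minimal sketch (the comma syntax and helper name are assumptions, not destalinator's actual config API):

```python
import os


def env_list(name, default=None):
    """Read a list-valued config option from the environment.

    Splits on commas and strips whitespace so the caller always
    gets a real list of patterns, never a raw string.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default or []
    return [item.strip() for item in raw.split(',') if item.strip()]


os.environ['DESTALINATOR_IGNORE_CHANNEL_PATTERNS'] = '^auto-,^zmeta'
print(env_list('DESTALINATOR_IGNORE_CHANNEL_PATTERNS'))  # ['^auto-', '^zmeta']
```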
I've been pretty happy with where @TheConnMan took the configuration module. This would make a neat micro-library. Probably parameterize the DESTALINATOR_ prefix...
I've been happy with the logger setup here and the WithLogger mixin, and hope to use this in other python projects.
The scheduler still cares about the local timezone on the container/server and will fail if it's not set, even when DESTALINATOR_RUN_ONCE is set. All scheduling should be skipped (and the scheduling code not run) when DESTALINATOR_RUN_ONCE is set.
This is relevant when running Destalinator as a Kubernetes CronJob that handles the scheduling itself.
Hi,
I changed the archive_threshold to 37 in configuration.yaml. Afterwards I rebuilt the destalinator Docker image, ran it, and entered the container. In the container I ran warner.py, which sent a message to the affected channels in Slack.
The message in #general was:
Hey, heads up -- the following channels are stale and will be archived if no one participates in them over the next 30 days
The message in the affected channels ended with:
If we don't hear from anyone in the next 30 days, this channel will be archived.
I was expecting to see 7 days instead of 30 days in those messages.
Extract of the configuration.yaml:
# Days of silence before we warn a channel it's going to be archived
warn_threshold: 30
# Days of silence before we auto-archive a channel
archive_threshold: 37
#116 introduced this. Disabling DESTALINATOR_LOG_TO_CHANNEL for now.
According to @TheConnMan's instructions in #124.
We changed our Slack workspace name and New Channel posts stopped. After I finally got around to changing the name in Destalinator it started working again. The archiving worked fine with the old name.
Responses from users.list and channels.list are limited to a specific number of returned objects.
Pagination must be implemented to ensure all users and channels are accounted for.
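Slack's Web API signals additional pages through a cursor in response_metadata.next_cursor (an empty string when exhausted). A minimal sketch of a pagination loop, with the page-fetching callable injected so the loop itself is testable offline (the helper name and the fake two-page data are illustrative, not destalinator code):

```python
def paginate(fetch, key):
    """Collect all objects from a cursor-paginated Slack API method.

    `fetch(cursor)` returns one page of the API response as a dict;
    `key` is the list field to accumulate ('members', 'channels', ...).
    Slack signals more pages via response_metadata.next_cursor,
    which is an empty string on the last page.
    """
    items, cursor = [], None
    while True:
        page = fetch(cursor)
        items.extend(page.get(key, []))
        cursor = page.get('response_metadata', {}).get('next_cursor', '')
        if not cursor:
            return items


# Fake two-page response to illustrate the loop (no network involved):
pages = {
    None: {'channels': [{'name': 'general'}],
           'response_metadata': {'next_cursor': 'abc'}},
    'abc': {'channels': [{'name': 'random'}],
            'response_metadata': {'next_cursor': ''}},
}
print(paginate(pages.get, 'channels'))  # both channels, across two pages
```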
Right now, the only way to change the warning and closure text is to change the warning.txt and/or closure.txt files. This should come from the config.
2017-06-01T14:03:14.674115+00:00 app[clock.1]: File "/app/.heroku/python/lib/python2.7/site-packages/apscheduler/executors/base.py", line 112, in run_job
2017-06-01T14:03:14.674128+00:00 app[clock.1]: retval = job.func(*job.args, **job.kwargs)
2017-06-01T14:03:14.674129+00:00 app[clock.1]: File "scheduler.py", line 24, in destalinate_job
2017-06-01T14:03:14.674130+00:00 app[clock.1]: scheduled_warner.warn()
2017-06-01T14:03:14.674131+00:00 app[clock.1]: File "/app/warner.py", line 11, in warn
2017-06-01T14:03:14.674132+00:00 app[clock.1]: self.ds.warn_all(self.config.warn_threshold, force_warn)
2017-06-01T14:03:14.674132+00:00 app[clock.1]: File "/app/destalinator.py", line 243, in warn_all
2017-06-01T14:03:14.674133+00:00 app[clock.1]: if self.warn(channel, days, force_warn):
2017-06-01T14:03:14.674134+00:00 app[clock.1]: File "/app/destalinator.py", line 162, in warn
2017-06-01T14:03:14.674135+00:00 app[clock.1]: if self.slacker.channel_has_only_restricted_members(channel_name):
2017-06-01T14:03:14.674136+00:00 app[clock.1]: print("mids for {} is {}".format(channel_name, mids))
2017-06-01T14:03:14.674136+00:00 app[clock.1]: File "/app/slacker.py", line 165, in channel_has_only_restricted_members
2017-06-01T14:03:14.674137+00:00 app[clock.1]: UnicodeEncodeError: 'ascii' codec can't encode character u'\u0ca0' in position 0: ordinal not in range(128)
https://api.slack.com/changelog/2020-11-no-more-tokens-in-querystrings-for-newly-created-apps
On February 24, 2021, we will stop allowing newly created Slack apps to send requests to Web API methods with access tokens presented in a URL query string. Instead, apps must send tokens in the Authorization HTTP header or alternatively as a URL-encoded POST body parameter.
Just when I finally want to try the destalinator!
@TheConnMan's docker setup needs us to tag things to work correctly... I think we could work through the outstanding bugs and make a 1.0 release.
It's underutilized and a frequent source of bugs. Can we get rid of it?
I installed Destalinator as a Docker image, configured the configuration.yaml file with the right slack_name, and added the mandatory environment variables in a separate .env file (myvars.env) that I pass when I run the container.
Then I noticed that nothing was happening in the Slack channels and the coverage was 70%.
I then went into the running Docker container and ran ./warner.py, which gives the following output:
/destalinator # ./warner.py
2017-12-08 12:21:15,880 [DEBUG]: Logging to slack channel: destalinator-log
2017-12-08 12:21:15,881 [DEBUG]: activated is TRUE
2017-12-08 12:21:15,886 [INFO]: Starting new HTTPS connection (1): slacktest.slack.com #name changed
2017-12-08 12:21:16,004 [DEBUG]: "GET /api/users.list?token= HTTP/1.1" 200 1299
2017-12-08 12:21:16,005 [DEBUG]: All restricted user names:
2017-12-08 12:21:16,079 [DEBUG]: "GET /api/channels.list?exclude_archived=1&token= HTTP/1.1" 200 635
2017-12-08 12:21:16,080 [DEBUG]: activated is TRUE
2017-12-08 12:21:16,082 [INFO]: Warning
2017-12-08 12:21:16,082 [INFO]: *ACTION: Warning all channels stale for more than 30 days*
2017-12-08 12:21:16,082 [DEBUG]: Not warning #destalinator-log because it's in ignore_channels
2017-12-08 12:21:16,160 [DEBUG]: "GET /api/channels.info?token=&channel=C1EA01JR1 HTTP/1.1" 200 405
2017-12-08 12:21:16,209 [DEBUG]: "GET /api/channels.info?token=&channel=C1EA01JR1 HTTP/1.1" 200 405
2017-12-08 12:21:16,210 [DEBUG]: Current members in general are {'U1EA01HH9', 'U5FFN9XPA'}
2017-12-08 12:21:16,285 [DEBUG]: "GET /api/channels.history?oldest=1510143676&token=&channel=C1EA01JR1&latest=1512735676 HTTP/1.1" 200 146
2017-12-08 12:21:16,286 [DEBUG]: Fetched 1 messages for #general over 30 days
2017-12-08 12:21:16,286 [DEBUG]: Filtered down to 1 messages based on included_subtypes: bot_message, channel_name, channel_purpose, channel_topic, file_mention, file_share, me_message, message_replied, reply_broadcast, slackbot_response
2017-12-08 12:21:16,286 [DEBUG]: Purging cache for general
2017-12-08 12:21:16,358 [DEBUG]: "GET /api/channels.info?token=&channel=C1EA01K0B HTTP/1.1" 200 463
2017-12-08 12:21:16,430 [DEBUG]: "GET /api/channels.info?token=&channel=C1EA01K0B HTTP/1.1" 200 463
2017-12-08 12:21:16,431 [DEBUG]: Current members in random are {'U1EA01HH9', 'U5FFN9XPA'}
2017-12-08 12:21:16,477 [DEBUG]: "GET /api/channels.history?oldest=1510143676&token=&channel=C1EA01K0B&latest=1512735676 HTTP/1.1" 200 94
2017-12-08 12:21:16,478 [DEBUG]: Fetched 0 messages for #random over 30 days
2017-12-08 12:21:16,478 [DEBUG]: Filtered down to 0 messages based on included_subtypes: bot_message, channel_name, channel_purpose, channel_topic, file_mention, file_share, me_message, message_replied, reply_broadcast, slackbot_response
2017-12-08 12:21:16,553 [DEBUG]: "GET /api/channels.info?token=&channel=C1EA01K0B HTTP/1.1" 200 463
2017-12-08 12:21:16,553 [DEBUG]: Current members in random are {'U1EA01HH9', 'U5FFN9XPA'}
2017-12-08 12:21:16,553 [DEBUG]: Returning 0 cached messages for #random over 30 days
Traceback (most recent call last):
File "./warner.py", line 18, in <module>
Warner().warn(force_warn=force_warn)
File "./warner.py", line 11, in warn
self.ds.warn_all(self.config.warn_threshold, force_warn)
File "/destalinator/destalinator.py", line 235, in warn_all
if self.warn(channel, days, force_warn):
File "/destalinator/destalinator.py", line 217, in warn
self.post_marked_up_message(channel_name, self.warning_text, message_type='channel_warning')
File "/destalinator/destalinator.py", line 105, in post_marked_up_message
self.slacker.post_message(channel_name, self.add_slack_channel_markup(message), **kwargs)
File "/destalinator/slacker.py", line 259, in post_message
post_data['attachments'] = json.dumps([{'fallback': message_type}], encoding='utf-8')
File "/usr/local/lib/python3.6/json/__init__.py", line 238, in dumps
**kw).encode(obj)
TypeError: __init__() got an unexpected keyword argument 'encoding'
I removed the api_token from the above output and changed the slack workspace name.
As you can see, it seems to connect to the desired Slack workspace, but it throws an error at the end (which might be related to the fact that there are 0 cached messages in the last 30 days?).
I ran the docker container with the command:
docker run --env-file myvars.env -it -d -p 8080:80 --name=destalinator <image name> sh -c "coverage html --skip-covered && python -m http.server 80"
We're instantiating the config object a few times. It should be possible to instantiate one inside the config module and use that everywhere else.
I saw this interaction happen in #intros and thought it was wonderful.
user1
... some introduction, including details of job ...
thanks :simple_smile:
user2
Also, BTW, you and @user3 are both basically doing the same thing, so you might enjoy chatting.
Adding a feature where users can tag themselves could potentially improve onboarding and cross-pollination of existing users.
slackbot
hi! welcome to rands-leadership. as part of our member matching, we'd love you to tag yourself (using terms like ecommerce, founder, etc.) to help match you with other users. for more information, type @randsbot matcher help.
@randsbot matcher help
slackbot
Hi @sheeley, the matcher command is used to help connect you to users who are interested in discussing similar topics. Try these commands:
- list - list all current tags
- add [ecommerce, founders, tag3, ...] - associate yourself with certain tags
- remove [ecommerce, tag2, ...] - remove association with tags
- match - show a random match with similar interests
- channel [tag] - create a channel and invite people tagged with [tag]
This is just spitballing, and full credit is due to @royrapoport. I'd love to see suggestions/changes, and happy to help implement once the draft is solidified (if the community feels this would be worthwhile).
Ignore me.
I'm seeing this warning even though the 💾 emoji does exist for my Slack team. I've even set it as a reaction to a Slack message just in case that was why it was complaining, but it persists:
WARNING: Flagger attempted to use an emoji that does not exist. Please add the floppy_disk emoji.
https://hub.docker.com/r/randsleadershipslack/destalinator/ returns a 404.
Can we make this a fully fledged service with an add to slack button? https://api.slack.com/docs/slack-button
When I got Destalinator up and running in Docker, someone asked if I wanted to run it in Lambda with a cron. I thought it was an interesting idea, though the 5-minute execution limit might be an issue for larger teams. As long as we get all the env vars working, creating a .zip that users could upload to Lambda should be pretty simple. Then we can reuse the RUN_ONCE flag to execute once on a nightly cron.
With the current version of the API, the scopes only allow User Token Scopes to read channel history, but only allow Bot Token Scopes to post messages.
As far as I can tell, the only workaround is to now set both the user and bot tokens of the app.
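With both tokens configured, the client has to pick the right one per endpoint: the user token for history-reading methods, the bot token for posting. A minimal sketch of that routing (the endpoint names are real Web API methods, but the class and routing table are an illustration, not destalinator's actual design):

```python
class TokenRouter:
    """Pick the user or bot token per Slack Web API method.

    Newer Slack apps grant history-reading scopes only to user
    tokens and posting scopes only to bot tokens, so a client
    needs both and must choose per call.
    """
    READ_METHODS = {'conversations.history', 'conversations.list'}

    def __init__(self, user_token, bot_token):
        self.user_token = user_token
        self.bot_token = bot_token

    def token_for(self, method):
        return self.user_token if method in self.READ_METHODS else self.bot_token


router = TokenRouter('xoxp-user-token', 'xoxb-bot-token')
print(router.token_for('conversations.history'))  # user token
print(router.token_for('chat.postMessage'))       # bot token
```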
It would be nice to have an option to tell destalinator which channel to send summaries to. We changed #general to #announcements in our instance.
warn_in_general uses a hardcoded channel name
Right now we post to general_message_channel with:
Hey, heads up -- the following channels are stale and will be archived if no one participates in them over the next 30 days: ...
But that "30 days" is hard-coded, and should instead be computed as (archive_threshold - warning_threshold).
ValueError: No JSON object could be decoded
File "scheduler.py", line 36, in destalinate_job
warner.Warner().warn()
File "/app/warner.py", line 11, in warn
self.ds.warn_all(self.config.warn_threshold, force_warn)
File "/app/destalinator.py", line 234, in warn_all
if self.stale(channel, days):
File "/app/destalinator.py", line 112, in stale
if not self.channel_minimum_age(channel_name, days):
File "/app/destalinator.py", line 56, in channel_minimum_age
info = self.slacker.get_channel_info(channel_name)
File "/app/slacker.py", line 188, in get_channel_info
ret = self.session.get(url).json()
File "requests/models.py", line 808, in json
return complexjson.loads(self.text, **kwargs)
File "json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "json/decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
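This ValueError typically means Slack returned a non-JSON body (for example an HTML rate-limit or error page), which makes `.json()` raise. One possible mitigation is to retry with backoff when parsing fails; the sketch below is an illustration of that policy, not destalinator's actual behavior, and takes a body-fetching callable so it can be shown without network access:

```python
import json
import time


def get_json_with_retry(fetch, retries=3, backoff=1.0):
    """Retry a request whose body failed to parse as JSON.

    `fetch()` returns the raw response body text. Parsing failures
    trigger an exponential backoff before the next attempt.
    """
    for attempt in range(retries):
        body = fetch()
        try:
            return json.loads(body)
        except ValueError:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError("no valid JSON after {} attempts".format(retries))


# Simulate one HTML error page followed by a good JSON body:
bodies = iter(['<html>429 Too Many Requests</html>', '{"ok": true}'])
print(get_json_with_retry(lambda: next(bodies), backoff=0))  # {'ok': True}
```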
Hi,
I installed destalinator using the docker image and I noticed that, when a channel is archived, the corresponding message does not come from destalinator but from my user.
The warning messages come, as desired, from destalinator and the messages for the new channels created come from slackbot.
I tried to make the archiving messages come from destalinator as well by adding this bot to my Slack app and using the bot's API TOKEN. However, when I do so, running warner.py will give the following error:
# ./warner.py
2017-12-13 09:59:34,277 [DEBUG]: activated is TRUE
2017-12-13 09:59:34,285 [INFO]: Starting new HTTPS connection (1): slacktest.slack.com
2017-12-13 09:59:34,418 [DEBUG]: "GET /api/users.list?token=xoxb-285445889489-DnuE3IcKoJ0Y7qaANRkNtak9 HTTP/1.1" 200 1466
2017-12-13 09:59:34,420 [DEBUG]: All restricted user names:
2017-12-13 09:59:34,488 [DEBUG]: "GET /api/channels.list?exclude_archived=1&token=xoxb-285445889489-DnuE3IcKoJ0Y7qaANRkNtak9 HTTP/1.1" 200 711
2017-12-13 09:59:34,490 [DEBUG]: activated is TRUE
2017-12-13 09:59:34,492 [INFO]: Warning
2017-12-13 09:59:34,492 [INFO]: *ACTION: Warning all channels stale for more than 30 days*
2017-12-13 09:59:34,492 [DEBUG]: Not warning #destalinator-log because it's in ignore_channels
2017-12-13 09:59:34,562 [DEBUG]: "GET /api/channels.info?token=xoxb-285445889489-DnuE3IcKoJ0Y7qaANRkNtak9&channel=C1EA01JR1 HTTP/1.1" 200 335
2017-12-13 09:59:34,637 [DEBUG]: "GET /api/channels.info?token=xoxb-285445889489-DnuE3IcKoJ0Y7qaANRkNtak9&channel=C1EA01JR1 HTTP/1.1" 200 335
2017-12-13 09:59:34,638 [DEBUG]: Current members in general are set([u'U5FFN9XPA', u'U1EA01HH9'])
2017-12-13 09:59:34,696 [DEBUG]: "GET /api/channels.history?oldest=1510567174&token=xoxb-285445889489-DnuE3IcKoJ0Y7qaANRkNtak9&channel=C1EA01JR1&latest=1513159174 HTTP/1.1" 200 99
Traceback (most recent call last):
File "./warner.py", line 18, in <module>
Warner().warn(force_warn=force_warn)
File "./warner.py", line 11, in warn
self.ds.warn_all(self.config.warn_threshold, force_warn)
File "/destalinator/destalinator.py", line 234, in warn_all
if self.stale(channel, days):
File "/destalinator/destalinator.py", line 121, in stale
messages = self.get_messages(channel_name, days)
File "/destalinator/destalinator.py", line 83, in get_messages
messages = self.slacker.get_messages_in_time_range(oldest, cid)
File "/destalinator/slacker.py", line 91, in get_messages_in_time_range
messages += payload['messages']
KeyError: 'messages'
Some channels don't have frequent new topics but do have active discussions in existing threads. These are not picked up by Destalinator, resulting in the warning appearing in the channel, followed by a less-than-on-topic update to stop the closure of an otherwise active channel.
Python projects usually follow a different structure than this one, which makes it a little confusing to maintain. Moving the .py files from the root to a new subdirectory called destalinator would help significantly.
I'm testing out destalinator on our Slack, and it hits the rate limit immediately. Is this a known issue? Is it because I'm using a personal access token, as recommended in the README?
Edit: here's the log output:
Destalinating
destalinator_activated is False
output_debug_to_slack_flag is True
destalinator_activated is False
destalinator_activated is False
ERROR:apscheduler.executors.default:Job "destalinate_job (trigger: cron[hour='*', minute='*/1'], next run at: 2017-04-19 21:01:00 UTC)" raised an exception
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/apscheduler/executors/base.py", line 112, in run_job
retval = job.func(*job.args, **job.kwargs)
File "scheduler.py", line 20, in destalinate_job
scheduled_archiver = archiver.Archiver()
File "/app/executor.py", line 27, in __init__
self.slacker = slacker.Slacker(config.SLACK_NAME, token=api_token)
File "/app/slacker.py", line 20, in __init__
self.get_users()
File "/app/slacker.py", line 36, in get_users
payload = requests.get(url).json()['members']
File "/app/.heroku/python/lib/python2.7/site-packages/requests/models.py", line 808, in json
return complexjson.loads(self.text, **kwargs)
File "/app/.heroku/python/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/app/.heroku/python/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/app/.heroku/python/lib/python2.7/json/decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
It does correctly say that I'm running it every minute, but this happens even if I wait a long time between invocations. It hasn't successfully run to repeat yet.
It looks like changes to the Slack API have made the destalinator non-functional. (At least, when I tried to run it against our slack, it didn't work.) I've tried applying a couple of the recent PRs, and although they're closer to working, they're still not functional.
Does anyone have a working version? If so, could a PR be created (and accepted)?
Hi there, I was attempting to use this slack bot to clean up some old channels in our organization but it seems that the two main files don't exist anymore in master?
Is this actively being maintained and if so are there plans to add the default functionality back to the master branch?
Thanks!