alerta / alerta-contrib

Contributed integrations, plugins and custom webhooks

Home Page: http://alerta.io

License: MIT License

Python 96.46% PHP 2.44% Shell 0.40% Jinja 0.69%
zabbix syslog prometheus monitoring python webhook plugin nagios riemann

alerta-contrib's Introduction

Alerta Contrib

Useful but non-essential additions to the alerta monitoring system.

Integrations are specific to the monitoring tool or service being integrated whereas plugins are standard extensions that are triggered before or after alert reception or by an external alert status change.

Some of the integrations listed below redirect to a dedicated Github repository.

Integrations

Plugins

Webhooks

Tests

To run the tests using a local Postgres database run:

$ pip install -r requirements-dev.txt
$ createdb test5
$ ALERTA_SVR_CONF_FILE= DATABASE_URL=postgres:///test5 pytest -v webhooks/*/test*

License

Copyright (c) 2014-2020 Nick Satterly and AUTHORS. Available under the MIT License.


alerta-contrib's Issues

alerta does not start with PLUGIN = ['zabbix']

I'm trying to configure this plugin, but after adding PLUGIN = ['zabbix'] alerta does not start.
I changed it to ['zabbix-alerta'] and at least it starts, but the script does not run.
I added DEBUG = True but I'm not seeing any meaningful logs in /var/log/messages. I've also tried to specify a log file, but after adding LOG_FILE = /var/log/alerta.log alerta does not start.

In addition, this script only works if you have only one Zabbix server. I changed it to grab the environment from the alert's attributes (alert_env = alert.attributes.get('environment', None)), and each environment was added as a Zabbix server alias in the local hosts file. I hope this will do the trick.
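The environment-to-server mapping described above could be sketched like this (the server aliases and the default are hypothetical, not part of the plugin):

```python
# Minimal sketch: pick a Zabbix server alias per alert environment.
# The mapping below is illustrative only.
ZABBIX_SERVERS = {
    "Production": "zabbix-prod",
    "Staging": "zabbix-staging",
}
DEFAULT_SERVER = "zabbix-prod"  # assumed fallback

def zabbix_server_for(alert_attributes):
    """Return the Zabbix server alias for an alert's environment attribute."""
    env = alert_attributes.get("environment")
    return ZABBIX_SERVERS.get(env, DEFAULT_SERVER)
```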

Add README for all plugins

Add READMEs for the following plugins:

  • amqp
  • enhance
  • geoip
  • hipchat
  • influxdb
  • logstash
  • normalise
  • pagerduty
  • prometheus
  • pushover
  • slack
  • sns
  • syslog
  • twilio sms

snmptrap handler api failure

Sending an SNMP trap from the example outputs:

2016-12-18 11:43:18,014 - alertaclient.api: DEBUG - Response Body: {
  "message": "Timeout must be an integer",
  "status": "error"
}

on request

2016-12-18 11:43:18,006 - urllib3.connectionpool: DEBUG - "POST /alert HTTP/1.1" 400 68
2016-12-18 11:43:18,007 - alertaclient.api: DEBUG - Request Headers: CaseInsensitiveDict({'Content-Length': '830', 'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.1 CPython/2.7.6 Linux/3.19.0-77-generic', 'Content-Type': 'application/json', 'Authorization': 'Key '})
2016-12-18 11:43:18,007 - alertaclient.api: DEBUG - Request Body: {"status": "unknown", "origin": "alerta-snmptrap/alerta", "resource": "localhost", "severity": "normal", "correlate": [], "tags": ["SNMPv2c"], "text": "Enterprise Specific", "createTime": "2016-12-18T11:43:17.000Z", "value": "6", "event": "iso.3.6.1.6.3.1.1.5.3.0", "environment": "Production", "customer": null, "service": ["Network"], "rawData": "$a 0.0.0.0\n$A 0.0.0.0\n$s 1\n$b UDP: [127.0.0.1]:39602->[127.0.0.1]:162\n$B localhost\n$x 2016-12-18\n$X 11:43:17\n$N .\n$q 0\n$P TRAP2, SNMP v2c, community public\n$t 1482061397\n$T 0\n$w 0\n$W Cold Start\niso.3.6.1.2.1.1.3.0 0:1:46:17.12~%~iso.3.6.1.6.3.1.1.4.1.0 iso.3.6.1.6.3.1.1.5.3.0~%~ccitt.0 \"This is a test linkDown trap\"\n", "timeout": null, "attributes": {"source": "localhost"}, "group": "SNMP", "type": "snmptrapAlert", "id": "cbe72a3b-506d-4244-bc31-e8805c43374b"}
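The request body above carries "timeout": null, which matches the "Timeout must be an integer" rejection. A hedged sketch of a workaround, assuming the handler builds the alert body as a dict (the default value is an assumption):

```python
# Sketch: coerce a missing or null timeout to an integer before posting,
# since the API rejects "timeout": null. The default is illustrative.
DEFAULT_TIMEOUT = 86400  # seconds; hypothetical default

def with_valid_timeout(alert_body, default=DEFAULT_TIMEOUT):
    """Return a copy of the alert body with an integer timeout."""
    body = dict(alert_body)
    if not isinstance(body.get("timeout"), int):
        body["timeout"] = default
    return body
```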

influxdb timestamp issue on ack

I am still digging into this but it appears that when you ack an alert, the influxdb plugin uses the "Last Receive Time" for the time point of the ack. This causes 2 entries in influx with the same timestamp, one for when the alert was opened and one for the ack. The actual timestamp of the ack does not get inserted.

Here is an example:

> select * from event where service = 'DataBase'
name: event
time                     environment event          resource    service  severity status text                                          value
----                     ----------- -----          --------    -------  -------- ------ ----                                          -----
2018-04-03T18:13:16.539Z Prod      Test CPU Alert SERVER01 DataBase critical open                                                 cpu 49 % is over threshold


This is after doing an ack on the alert a couple of minutes later...notice the timestamps:

> select * from event where service = 'DataBase'
name: event
time                     environment event          resource    service  severity status text                                          value
----                     ----------- -----          --------    -------  -------- ------ ----                                          -----
2018-04-03T18:13:16.539Z Prod      Test CPU Alert SERVER01 DataBase critical ack    bulk status change via console by Admin cpu 49 % is over threshold
2018-04-03T18:13:16.539Z Prod      Test CPU Alert SERVER01 DataBase critical open                                                 cpu 49 % is over threshold
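A possible fix, sketched under the assumption that the plugin builds InfluxDB points as dicts: stamp status changes with the time the change happened rather than lastReceiveTime, so the open and ack rows get distinct timestamps. Measurement and tag names mirror the output above, not the plugin's actual code:

```python
from datetime import datetime, timezone

# Sketch: build an InfluxDB point for a status change using the time of the
# change, not the alert's lastReceiveTime (the bug described above).
def status_change_point(alert, status, change_time=None):
    ts = change_time or datetime.now(timezone.utc)
    return {
        "measurement": "event",
        "tags": {
            "environment": alert["environment"],
            "resource": alert["resource"],
            "status": status,
        },
        "time": ts.isoformat(),
        "fields": {"value": alert.get("value", "")},
    }
```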

Cachet plugin - Visible tag

Is it possible to override the default VISIBLE status when an alert from Alerta is sent to Cachet?
My problem is that each alert is published with the tag VISIBLE=True, but I'd like to distinguish whether an issue must be set VISIBLE=True (for example if the alert concerns a user-facing functionality) or VISIBLE=False (for example if the alert is just an internal alert for the technician team or something similar).

What I expect
My idea is to add, in Alerta, a tag called VISIBLE with a different boolean value for each alert. That way the Cachet plugin can read this value and add the incident to the public dashboard or to the private dashboard.
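The proposed behaviour could look roughly like this (the tag name and default are the reporter's suggestion, not an existing Cachet plugin option):

```python
# Sketch: derive Cachet's "visible" flag from a VISIBLE=... alert tag.
# Tag name and default are assumptions for illustration.
def cachet_visible(tags, default=True):
    """Return the boolean carried by a VISIBLE=... tag, else the default."""
    for tag in tags:
        if tag.upper().startswith("VISIBLE="):
            return tag.split("=", 1)[1].strip().lower() == "true"
    return default
```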

Sorry for my English,

Thank you
Fabius87

Mailer resets connection to localhost repeatedly

I'm sure this is a config issue but when I look at the CLI for alerta-mailer I see these lines repeatedly.

INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: localhost
Resetting dropped connection: localhost
DEBUG:requests.packages.urllib3.connectionpool:"POST /heartbeat HTTP/1.1" 201 484
"POST /heartbeat HTTP/1.1" 201 484

It passes the setup.py test and it says it connects successfully to mongo on load. My config looks like this:

[DEFAULT]
#timezone = United States/New York
# output = json
profile = production

[profile production]
endpoint = http://alerta.domain.com/api
#key = demo-key

[profile development]
endpoint = http://localhost:8080
debug = yes

[alerta-mailer]
#key = demo-key
mail_to = [email protected]
mail_from = [email protected]
amqp_url = mongodb://localhost:27017/kombu
dashboard_url = http://localhost
smtp_password = the account password for the mail_from address above
debug = True
skip_mta = False
email_type = text

Any help would be awesome, thanks!

Slack notification on acknowledgment

It would be nice to have an option to send out a notification upon acknowledgement of a problem. I can prepare a PR for this if there are no objections?
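Alerta plugins expose a status_change hook alongside post_receive, so a sketch of the requested feature might look like this (the class name, payload shape and webhook handling are assumptions, and the actual HTTP call is left disabled in this sketch):

```python
import json
import urllib.request

# Sketch: notify Slack when an alert is acknowledged. The real slack
# plugin's post_receive fires on new alerts; status_change fires on
# ack/close, which is where the requested notification would go.
class SlackAckNotifier:
    def __init__(self, webhook_url):
        self.webhook_url = webhook_url

    def status_change(self, alert, status, text):
        if status != "ack":
            return alert
        payload = {"text": "Acknowledged: %s (%s)" % (alert["event"], text)}
        req = urllib.request.Request(
            self.webhook_url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        # urllib.request.urlopen(req)  # network call disabled in this sketch
        return alert
```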

Add new options to urlmon

Here is my code:

I added an option to add a new resource without stopping urlmon.py.

web.json

{
    "1": {
        "resource": "www.svtcloud.com",
        "url": "https://www.svtcloud.com",
        "environment": "urlmon",
        "customer": "SVT",
        "service": ["SVTCloud", "Monitoring"],
        "tags": ["SVTCloud", "WEB"]
    },
    "2": {
        "resource": "hacking-etico.com",
        "url": "http://hacking-etico.com",
        "environment": "urlmon",
        "customer": "SVT",
        "service": ["Hacking-Etico", "Monitoring"],
        "tags": ["Blog", "SVTCloud", "WEB"]
    },
    "3": {
        "resource": "tacol.svtcloud.com",
        "url": "https://tacol.svtcloud.com",
        "environment": "urlmon",
        "customer": "SVT",
        "service": ["MSOC Intranet", "Monitoring"],
        "tags": ["Confluence", "Intranet", "SVTCloud", "MSOC", "WEB"]
    },
    "4": {
        "resource": "www.semic.es",
        "url": "http://www.semic.es",
        "environment": "urlmon",
        "customer": "SVT",
        "service": ["SEMIC WEB", "Monitoring"],
        "tags": ["SEMIC", "MSOC", "WEB"]
    },
    "5": {
        "resource": "www.kateosllll.es",
        "url": "http://www.s3kateoslll.es",
        "environment": "urlmon",
        "customer": "SVT",
        "service": ["SEMIC WEB", "Monitoring"],
        "tags": ["SEMIC", "MSOC", "WEB"]
    }
}

Now I made changes to urlmon.py.
I changed only:

class UrlmonDaemon(object):

    def __init__(self):
        self.shuttingdown = False
        self.webs = self.loadSetting("web.json")

    def loadSetting(self, filepath):
        import json
        try:
            with open(filepath) as f:
                d = json.load(f)
            return [d[k] for k in d]
        except (TypeError, ValueError) as e:
            print(e)
            sys.exit(2)

    def run(self):

        self.running = True
        self._cached_stamp = 0
        self.filename = "web.json"
        self.queue = Queue.Queue()
        self.api = ApiClient(endpoint=settings.ENDPOINT,
                             key=settings.API_KEY)

        # Start worker threads
        LOG.debug('Starting %s worker threads...', SERVER_THREADS)
        for i in range(SERVER_THREADS):
            w = WorkerThread(self.queue, self.api)
            try:
                w.start()
            except Exception as e:
                LOG.error('Worker thread #%s did not start: %s', i, e)
                continue
            LOG.info('Started worker thread: %s', w.getName())

        while not self.shuttingdown:
            try:
                # Reload the web settings when the JSON file changes
                stamp = os.stat(self.filename).st_mtime
                if stamp != self._cached_stamp:
                    self._cached_stamp = stamp
                    self.webs = self.loadSetting(self.filename)

                for check in self.webs:
                    self.queue.put((check, time.time()))

                LOG.debug('Send heartbeat...')
                heartbeat = Heartbeat(tags=[__version__])
                try:
                    self.api.send(heartbeat)
                except Exception as e:
                    LOG.warning('Failed to send heartbeat: %s', e)

                time.sleep(LOOP_EVERY)
                LOG.info('URL check queue length is %d', self.queue.qsize())

            except (KeyboardInterrupt, SystemExit):
                self.shuttingdown = True

        LOG.info('Shutdown request received...')
        self.running = False

        for i in range(SERVER_THREADS):
            self.queue.put(None)
        w.join()

Slack enabled but seems not working

I added slack to the plugin list (by adjusting PLUGINS value) and added SLACK_WEBHOOK_URL environment variable. From the server log I could see that Slack is already loaded:

2016-05-26 15:20:06,836 - alerta.app[22523]: INFO - Server plug-in 'normalise' enabled.
2016-05-26 15:20:06,839 - alerta.app[22523]: INFO - Server plug-in 'reject' enabled.
2016-05-26 15:20:06,842 - alerta.app[22523]: INFO - Server plug-in 'slack' enabled.
2016-05-26 15:20:06,845 - alerta.app[22523]: INFO - Server plug-in 'normalise' enabled.
2016-05-26 15:20:06,847 - alerta.app[22523]: INFO - Server plug-in 'reject' enabled.
2016-05-26 15:20:06,850 - alerta.app[22523]: INFO - Server plug-in 'slack' enabled.

But after adding alerts to alerta, I never see any message posted to the desired slack channel.

telegram bot behind proxy

Hi. Here in Russia we have to use proxies for Telegram. telepot says:
nickoala/telepot#309
Can I handle this in some way? We're using the Docker version of alerta, and plugins are installed after every deploy. I'm thinking about putting the plugin code, with my changes, into our local git and installing it into alerta from there, but that isn't very elegant (maybe I missed something?). I will be grateful for help.

upd: telepot does not honour HTTPS_PROXY, if I understand correctly (I tried): nickoala/telepot#294

Slack plugin ignores blackout settings?

It seems like the Slack plugin sends a message for all incoming alerts, regardless if that alert matches a blackout rule. Even with NOTIFICATION_BLACKOUT enabled.
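If NOTIFICATION_BLACKOUT marks suppressed alerts with a blackout status, as the reporter expects, the notification plugin could guard on that before posting; a minimal sketch (the status names here are assumptions):

```python
# Sketch: skip notifications for alerts suppressed by a blackout.
# Status values are assumed, not taken from the plugin's source.
SUPPRESSED_STATUSES = ("blackout", "shelved")

def should_notify(alert_status):
    """Return True when a notification plugin should post this alert."""
    return alert_status not in SUPPRESSED_STATUSES
```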

Remove or pin use of a deprecated python package, google-cloud

https://github.com/alerta/alerta-contrib/search?q=google-cloud&unscoped_q=google-cloud

This repository contains a setup.py or requirements.txt file that depends on
the google-cloud package. This package is deprecated. If you wish to continue
to use this package, please pin the version to google-cloud<0.34.0. If you have
more time to dedicate to this, I would advise sorting out which sub-packages you
actually rely on and depending on them.

For further information please view this package on PyPI, https://pypi.org/project/google-cloud/

influxdb empty values for text and value causes error

I have an alert source that does not send any text or value when it closes an alert. This causes the insertion to fail in Influx with an "invalid field format" error. It seems that setting the text to a space fixes the issue... not sure if that is the best way to deal with this, as Influx does not allow null values to be sent as fields.

I guess the other option is to ensure that all alert sources always send text or a value... if that is the direction, then a more detailed error message from the Influx plugin may be all that is needed.
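One possible sanitization step, sketched as a standalone helper (the placeholder value is an arbitrary choice): replace empty or null fields before handing them to the Influx client.

```python
# Sketch: InfluxDB rejects null/empty field values ("invalid field
# format"), so pad them with a placeholder before writing.
def sanitize_fields(fields, placeholder="n/a"):
    """Return a copy of fields with None/empty-string values replaced."""
    clean = {}
    for key, value in fields.items():
        if value is None or value == "":
            clean[key] = placeholder
        else:
            clean[key] = value
    return clean
```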

Invalid syntax error python3

Hi,

I am giving a try to alerta, which seems to be a very good solution for alert aggregation and processing; thanks for your work.

I succeeded in setting up an alerta server 4.10.4 run by uwsgi and an alerta client 5.0.1, which can query and send alerts to the server without problems.

Now that I can manage and create alerts (in my case from kapacitor), I would like to set up the very first useful plugin on alerta: mail notification on alert.

I installed alerta-amqp, as it is a requirement, and it seems to be loaded correctly at server start. Then I installed alerta-mailer from pip (as documented, and as for alerta-amqp).

But when I try to run "alerta-mailer" from the command line I get a syntax error:

Traceback (most recent call last):
  File "/usr/bin/alerta-mailer", line 11, in <module>
    load_entry_point('alerta-mailer==3.5.0', 'console_scripts', 'alerta-mailer')()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 570, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2751, in load_entry_point
    return ep.load()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2405, in load
    return self.resolve()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2411, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/lib/python3.6/site-packages/mailer.py", line 299
    except (socket.error, socket.herror, socket.gaierror), e:
                                                         ^
SyntaxError: invalid syntax

I use Python 3.6.2. Maybe I should try Python 2, but in that case should I have the rest of the stack (client, server, amqp) also in Python 2?

Thanks for your help

mailer alerts are in some cases never sent when alert is cleared

The actual design of filtering the alerts (the on_message function) works well in the basic use case of alerts going from critical/major to normal/ok/cleared.
But it does not work well with alerts transitioning through the warning and informational states, because it only considers cleared alerts coming from a previous state of critical or major.

For example: I am using alerts from kapacitor, which sends alerts of type info, warn and critical depending on the server load. Server load is not something that will go straight from critical to ok, because that takes time.
So if the load is increasing, I'll get an Open/critical mail alert; when it calms down I'll get an Open/warning alert, but I'll never get a Closed/info, Closed/ok or Closed/cleared email alert, because the mailer integration does not take those transitions into account.

I am working on a patch to fix that, but if you have recommendations on what should be done to cover most people's use cases, I am all ears!
I am thinking of using the history of the alert to decide whether we were previously in an alerting state or not.
That seems the better thing to do, as we don't want to send mail (but perhaps some will? perhaps a config option?) for alerts transitioning from normal => informational => normal.
Perhaps a minimum alert threshold?
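The history-based idea above could be sketched like this, assuming history records carry a severity key (the record shape and the set of "alerting" severities are assumptions):

```python
# Sketch: decide whether to email on close by checking if the alert ever
# passed through an alerting severity. History record shape is assumed.
ALERTING = {"critical", "major"}

def was_alerting(history):
    """history: list of dicts with a 'severity' key, oldest first."""
    return any(h.get("severity") in ALERTING for h in history)

def should_mail_on_close(history):
    return was_alerting(history)
```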

urlmon number of retries

Can urlmon be configured to only create an alert after X failed attempts? I see the 'count' option in the check definition, but I can't figure out what it does.
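For comparison, a consecutive-failure gate like the one being asked about could be sketched as follows (this is illustrative only; it is not what urlmon's count option actually does):

```python
from collections import defaultdict

# Sketch: only fire an alert after N consecutive failures per resource.
class FailureGate:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = defaultdict(int)

    def record(self, resource, ok):
        """Record a check result; return True when an alert should fire."""
        if ok:
            self.failures[resource] = 0
            return False
        self.failures[resource] += 1
        return self.failures[resource] >= self.threshold
```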

Feature: Integrate with linkedin/oncall?

Is your feature request related to a problem? Please describe.
Is it possible to integrate with https://github.com/linkedin/oncall ?

Describe the solution you'd like
Send the alert to the person who is currently on call (in emergency mode), as managed in the Oncall application.

Describe alternatives you've considered
PagerDuty, VictorOps, OpsGenie -> all of them are complete solutions. With this integration it would be possible to approach these all-in-one solutions.

Logstash plugin broken (Python3)

Hi,

I am trying to use the logstash plugin, but found out there are some problems with it :

  • It does not convert the LOGSTASH_PORT env variable to int (I use the Docker version of alerta-web, so I configured it using env variables)
  • The socket.send method wants bytes, not str (a breaking change in Python 3)

I have worked around the first problem, and even the second one (I'll do a PR when all is working fine), but I am having a hard time understanding what exactly should be sent to logstash...

At first, the logstash plugin was sending this :
self.sock.send("%s\r\n" % alert)
So I changed it to :
self.sock.send(b"%s\r\n" % bytes(str(alert), 'utf-8'))
It works well, but on the logstash/elasticsearch side, what I get is documents with a "message" attribute consisting of :
Alert(id='449b842a-cb94-420a-a53b-48eae9836816', environment='devel', resource='bacula', event='SystemLoad', severity='informational', status='open', customer=None)

Which of course does not really look like a JSON document...

Is that really what should be sent to logstash/ES? I was expecting logstash to receive a JSON doc it would be able to parse.
Is there somewhere a more complete documentation ? (the only one I found is https://github.com/alerta/kibana-alerta/blob/master/README.md)

Thanks for your help.
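One plausible answer to the question above, sketched under the assumption that the alert object exposes a dict form (here passed in directly): serialize it to JSON before writing to the socket, with a date-aware encoder similar to the one the AMQP plugin uses.

```python
import json
from datetime import datetime

# Sketch: send a JSON document to the logstash TCP input instead of
# str(alert). Encoder and payload shape are assumptions for illustration.
class DateEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, datetime):
            return o.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
        return super().default(o)

def logstash_payload(alert_dict):
    """Serialize an alert dict to a newline-terminated JSON byte string."""
    return (json.dumps(alert_dict, cls=DateEncoder) + "\r\n").encode("utf-8")
```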

Slack plugin doesn't work

Hi Dev,

Slack plugin does not work with following config

in ~/.alertad.conf

CORS_ORIGINS = [
'http://10.71.1.6'
]

SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/#######################'

PLUGINS = ['reject' , 'slack']

I get this error when I launch the alertad server:
[root@devops-infra-003 slack]# alertad
[2016-09-29 14:06:39,425] ERROR in __init__: Server plug-in 'slack' could not be loaded: No module named alerta_slack
[2016-09-29 14:06:39,437] ERROR in __init__: Server plug-in 'slack' could not be loaded: No module named alerta_slack

  • Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
    10.70.30.60 - - [29/Sep/2016 14:06:40] "GET /environments?status=open&status=unknown HTTP/1.1" 200 -
    10.70.30.60 - - [29/Sep/2016 14:06:40] "GET /alerts/count?status=open&status=unknown HTTP/1.1" 200 -
    10.70.30.60 - - [29/Sep/2016 14:06:40] "GET /alerts?status=open&status=unknown HTTP/1.1" 200 -

But I have the alerta-slack plugin installed on my machine:
[root@devops-infra-003 alerta]# pip freeze
alerta==4.8.1
alerta-server==4.8.5
alerta-slack==0.3.1

Thanks ,
Onkar

"Run Book Url" from zabbix

Hello,

I'm sending attributes.runBookUrl={TRIGGER.URL} in my Actions/Operations/Default message (in zabbix)

But the field Run Book Url is not correctly filled with this information.

[screenshot]

How can I use this field? There is another name to use it?

Note: The field attributes.moreInfo works well!

Thanks!

AMQP Plugin serialization issue (again)

Hi,

For some reason messages are now being sent with content-type 'text/plain' instead of 'application/json'. This is causing Mailer to fail with "ContentDisallowed: Refusing to deserialize untrusted content of type text/plain" error. It would seem that the change turns single quotes into double quotes, which causes Kombu producer to fail to recognize json. Or something like that (not much of a programmer myself)

body = alert.serialize:
2017-10-30 16:12:34,548 - alerta.plugins.amqp[25936]: DEBUG - EFORE: {'repeat': True, 'history': [], 'rawData': "{'Err': None, 'Series': [{'tags': {'remote': 'XXXXXXXXXXXXXXXXXXXXXX.1', 'host': 'XXXXXXXXXXXXXXXXXXXXXX', 'stratum': '1', 'server_group': 'rabbitmq', 'server_group_env': 'prod'}, 'values': [['2017-10-30T15:07:31.907705404Z', 1.166]], 'name': 'ntpq', 'columns': ['time', 'stat']}], 'Messages': None}", 'type': 'exceptionAlert', 'id': 'ec53c40f-7c44-45e7-aa31-1e14fb4e4fcc', 'environment': 'prod', 'tags': [], 'customer': None, 'lastReceiveTime': datetime.datetime(2017, 10, 30, 15, 12, 34, 539000), 'previousSeverity': 'ok', 'href': 'XXXXXXXXXXXXXXXXXXXXXX/api/alert/ec53c40f-7c44-45e7-aa31-1e14fb4e4fcc', 'correlate': [], 'severity': 'informational', 'timeout': 86400, 'origin': 'Kapacitor', 'attributes': {'ip': 'XXXXXXXXXXXXXXXXXXXXXX'}, 'service': ['XXXXXXXXXXXXXXXXXXXXXX'], 'lastReceiveId': '54d785fa-4efa-4385-8909-c41f8d1aaff7', 'createTime': datetime.datetime(2017, 10, 30, 15, 8, 34, 461000), 'status': 'open', 'receiveTime': datetime.datetime(2017, 10, 30, 15, 8, 34, 485000), 'resource': 'XXXXXXXXXXXXXXXXXXXXXX (rabbitmq)', 'event': 'OS_NTP - XXXXXXXXXXXXXXXXXXXXXX.1/STR: 1 ', 'trendIndication': 'lessSevere', 'value': '1.17ms', 'duplicateCount': 8, 'text': 'INFO: NTP offset to XXXXXXXXXXXXXXXXXXXXXX.1/STR: 1 is at 1.17ms for host XXXXXXXXXXXXXXXXXXXXXX', 'group': 'os'} [in /opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/alerta_amqp-5.4.1-py3.5.egg/alerta_amqp.py:63]

body = json.dumps(alert.serialize, cls=DateEncoder):
2017-10-30 16:12:34,549 - alerta.plugins.amqp[25936]: DEBUG - AFTER: {"repeat": true, "history": [], "rawData": "{'Err': None, 'Series': [{'tags': {'remote': 'XXXXXXXXXXXXXXXXXXXXXX.1', 'host': 'XXXXXXXXXXXXXXXXXXXXXX', 'stratum': '1', 'server_group': 'rabbitmq', 'server_group_env': 'prod'}, 'values': [['2017-10-30T15:07:31.907705404Z', 1.166]], 'name': 'ntpq', 'columns': ['time', 'stat']}], 'Messages': None}", "type": "exceptionAlert", "id": "ec53c40f-7c44-45e7-aa31-1e14fb4e4fcc", "environment": "prod", "tags": [], "customer": null, "lastReceiveTime": "2017-10-30T15:12:34.539Z", "previousSeverity": "ok", "href": "XXXXXXXXXXXXXXXXXXXXXX/api/alert/ec53c40f-7c44-45e7-aa31-1e14fb4e4fcc", "correlate": [], "severity": "informational", "timeout": 86400, "origin": "Kapacitor", "attributes": {"ip": "XXXXXXXXXXXXXXXXXXXXXX"}, "service": ["XXXXXXXXXXXXXXXXXXXXXX"], "lastReceiveId": "54d785fa-4efa-4385-8909-c41f8d1aaff7", "createTime": "2017-10-30T15:08:34.461Z", "status": "open", "receiveTime": "2017-10-30T15:08:34.485Z", "resource": "XXXXXXXXXXXXXXXXXXXXXX (rabbitmq)", "event": "OS_NTP - XXXXXXXXXXXXXXXXXXXXXX.1/STR: 1 ", "trendIndication": "lessSevere", "value": "1.17ms", "duplicateCount": 8, "text": "INFO: NTP offset to XXXXXXXXXXXXXXXXXXXXXX.1/STR: 1 is at 1.17ms for host XXXXXXXXXXXXXXXXXXXXXX", "group": "os"} [in /opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/alerta_amqp-5.4.1-py3.5.egg/alerta_amqp.py:65]

from @davtex

How to install slack plugin?

I followed the general instructions (cd plugins/slack && python setup.py install), but this doesn't install the slack plugin into the plugin directory of alerta (https://github.com/guardian/alerta).

I tried copying the file slack.py directly to alerta/plugins, then updating the PLUGINS variable to include slack, but the server log shows that it wasn't loaded after re-deploying.
I'm a little confused about how to install this kind of plugin.

Syslog Integration

Using alerta version 4.7.26 there appears to be an issue with the syslog integration.
Installing on the Cloudformation Alerta distribution goes ok, but when running from /usr/local/bin/alerta-syslog the response is:

File "/usr/local/bin/alerta-syslog", line 9, in <module>
  load_entry_point('alerta-syslog==3.3.0', 'console_scripts', 'alerta-syslog')()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 558, in load_entry_point
  return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2682, in load_entry_point
  return ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2355, in load
  return self.resolve()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2365, in resolve
  raise ImportError(str(exc))
ImportError: 'module' object has no attribute 'main'

Twilio sms + multiple people

Add support for an array input in the twilio TWILIO_TO_NUMBER settings.
In the send code an iterator could be used over the array to support alerting multiple numbers.
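A sketch of the requested change, with the Twilio call abstracted behind a callable so nothing here depends on the real twilio API (parameter names are illustrative):

```python
# Sketch: accept either a single number, a comma-separated string, or a
# list for TWILIO_TO_NUMBER, and iterate when sending.
def to_number_list(setting):
    """Normalize the TO-number setting into a list of numbers."""
    if isinstance(setting, str):
        return [n.strip() for n in setting.split(",") if n.strip()]
    return list(setting)

def send_sms(client_send, body, to_numbers, from_number):
    """Send one SMS per recipient via the supplied client callable."""
    for number in to_number_list(to_numbers):
        client_send(to=number, from_=from_number, body=body)
```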

Cloudwatch webhook in 4.8.4

I'm using Alerta Server 4.8.4 and I'm trying to set up the webhook for Cloudwatch integration. I have set up an SNS topic and subscription and verified that I can reach the Alerta endpoint API. I tried to send a subscription confirmation, but the confirmation never comes back.

I found the cloudwatch webhook in the source and it does seem to respond to the confirmations - so then I poked around and realized that the 4.8.4 installation I have doesn't have a cloudwatch.py in the app/webhooks directory.

Is there something I have to do to add the webhook to my installation or enable it? The documentation suggests that it is a built-in webhook and doesn't indicate any further setup is required.

Allow per Environment config for plugins

Currently there is no way for plugins to behave differently depending on alert attributes, especially the "environment" one. That would be great, so staging-like environments could be monitored with the same alerta server as prod without pushing SMS notifications (for example).
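A minimal sketch of per-environment gating inside a plugin hook (the opt-in set and the hook shape are assumptions, not an existing config option):

```python
# Sketch: only notify for environments that opted in, so e.g. staging
# alerts skip SMS. The NOTIFY_ENVIRONMENTS set is hypothetical config.
NOTIFY_ENVIRONMENTS = {"Production"}

def post_receive(alert, notify):
    """Call notify(alert) only for environments that opted in."""
    if alert.get("environment") in NOTIFY_ENVIRONMENTS:
        notify(alert)
    return alert
```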

Api gateway integration

I'm writing an integration to open incidents on ServiceNow via a custom API gateway.
I managed to open the incident and add a custom attribute "servicenowID" to the alert.
If I close the alert from the alerta web UI, I manage to close the incident on ServiceNow.
But if I send the same alert with severity "normal", the status in alerta changes to closed, yet I can't close the incident on ServiceNow because the alert does not have the servicenowID attribute.
How can I get the attributes of the "parent" alert I'm updating when I send the same alert with a changed severity?
Thanks
Jack

Add README for all integrations

Add READMEs for the following integrations:

  • cloudwatch
  • consul
  • mailer
  • opsweekly
  • pinger
  • snmptrap
  • supervisor
  • amazon sqs
  • syslog
  • urlmon

Alerta doesn't send silence to alertemanager

I have prometheus with alertmanager and alerta. I am trying to set up alerta Prometheus plugin.
config in /etc/alertad.conf
DEBUG = True
AUTH_REQUIRED = True
AUTH_PROVIDER = 'gitlab'
...
gitlab_settings
...
CORS_ORIGINS = [
'http://localhost',
'http://localhost:8000'
]
PLUGINS = ['reject','prometheus']
ALERTMANAGER_API_URL = 'http://localhost:9093'
ALERTMANAGER_SILENCE_DAYS = 1

When I change the status from open to ack in the alerta console, nothing happens. There is nothing in the logs. I don't see messages like "Server plug-in 'prometheus' found." I confirmed that alerta loads the plugin (by setting ALERTMANAGER_SILENCE_DAYS to a string, which returned an error).
I also tried to send the ack with the alerta CLI. What else do I need to check or configure?

Integrations broken after Python API changed

$ git grep api.send
integrations/consul/consulalerta.py:            response = api.send_alert(
integrations/consul/consulheartbeat.py:            print(api.send(hb))
integrations/pinger/pinger.py:                r = self.api.send(pingAlert)
integrations/pinger/pinger.py:                    self.api.send(heartbeat)
integrations/snmptrap/handler.py:                self.api.send(snmptrapAlert)
integrations/snmptrap/handler.py:            self.api.send(heartbeat)
integrations/supervisor/evlistener.py:            api.send(supervisorAlert)
integrations/syslog/syslogfwder.py:                                self.api.send(alert)
integrations/syslog/syslogfwder.py:                        self.api.send(heartbeat)
integrations/urlmon/urlmon.py:                self.api.send(urlmonAlert)
integrations/urlmon/urlmon.py:                    self.api.send(heartbeat)

SNMP Python upgrade

I am not able to integrate SNMP with Alerta 6.6, as Alerta requires Python 3 whereas the SNMP integration is built on Python 2.
Can you please tell me whether the Python upgrade for SNMP will be done by you, or if I have to do it.
Below is a screenshot of the error I am getting.

[screenshot]

snmptrap - NameError: global name 'Alert' is not defined

I don't receive any trap messages in Alerta, so I ran snmptrapd in debug mode, where I see this exception from the alerta snmptrap handler:

Logs:

2017-12-20 07:01:50,902 - alerta.snmptrap: DEBUG - $1 1:18:47:46.42
2017-12-20 07:01:50,902 - alerta.snmptrap: DEBUG - $2 IF-MIB::linkDown.0
2017-12-20 07:01:50,902 - alerta.snmptrap: DEBUG - $3 "This is a test linkDown trap"
2017-12-20 07:01:50,902 - alerta.snmptrap: DEBUG - varbinds = {u'DISMAN-EVENT-MIB::sysUpTimeInstance': u'1:18:47:46.42', u'SNMPv2-SMI::zeroDotZero': u'"This is a test linkDown trap"', u'SNMPv2-MIB::snmpTrapOID.0': u'IF-MIB::linkDown.0'}
2017-12-20 07:01:50,902 - alerta.snmptrap: DEBUG - trapvars = {u'$N': u'.', '$O': u'IF-MIB::linkDown.0', '$1': u'1:18:47:46.42', u'$X': u'01:31:50', '$3': u'"This is a test linkDown trap"', u'$b': u'UDP: [10.11.12.169]:49858->[10.11.12.147]:162', u'$T': u'0', u'$B': u'', '$#': '3', u'$A': u'0.0.0.0', u'$x': u'2017-12-20', u'$W': 'Link Down', u'$w': '2', u'$t': u'1513733510', u'$a': u'0.0.0.0', '$2': u'IF-MIB::linkDown.0', u'$s': 'SNMPv2c', u'$P': u'TRAP2, SNMP v2c, community public', u'$q': u'0'}
2017-12-20 07:01:50,903 - alerta.snmptrap: INFO - SNMPv2c-Trap-PDU IF-MIB::linkDown.0 from at 2017-12-20 01:31:50
2017-12-20 07:01:50,903 - alerta.snmptrap: ERROR - global name 'Alert' is not defined
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/handler.py", line 190, in main
    SnmpTrapHandler().run()
  File "/usr/lib/python2.7/site-packages/handler.py", line 37, in run
    snmptrapAlert = SnmpTrapHandler.parse_snmptrap(data)
  File "/usr/lib/python2.7/site-packages/handler.py", line 163, in parse_snmptrap
    snmptrapAlert = Alert(
NameError: global name 'Alert' is not defined

Custom plugin not being called

Hi! I'm trying to develop a plugin. This might not be an issue, but I've tried copying and pasting existing ones and tweaking them for what I need, and it doesn't seem to work.
This is the setup.py file:
https://pastebin.com/085D1Uef

This is my script which basically issues a couple of HTTP requests:
https://pastebin.com/byAqLkpa

I installed the plugin by running python setup.py install and added it to the PLUGINS variable in /etc/alertad.conf, along with my custom variable MYPLUGIN_API_BASE_URI, like this:

PLUGINS = ['myplugin']
MYPLUGIN_API_BASE_URI = 'http://myurl.domain.com:81'

I have a TICK stack (InfluxDB) and alerts are successfully pushed from Kapacitor to Alerta, but it seems my plugin never gets called.

Anything I'm overlooking here?
Thanks

mailer hangs when API not accessible

    File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 522, in post
      return self.request('POST', url, data=data, json=json, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 475, in request
      resp = self.send(prep, **send_kwargs)
    File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 596, in send
      r = adapter.send(request, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
      raise ConnectionError(e, request=request)
    ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /heartbeat (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc0fc049250>: Failed to establish a new connection: [Errno 111] Connection refused',))

and stays like this.
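A defensive pattern for the heartbeat call, sketched with `requests` (the URL and payload are illustrative, not the mailer's actual code): set a timeout and catch connection failures so the mailer skips a cycle instead of hanging or dying when the API is down.

```python
import requests

def send_heartbeat(api_url='http://localhost:8080/heartbeat', timeout=5):
    """Return the HTTP status code, or None if the API is unreachable."""
    try:
        resp = requests.post(api_url, json={'origin': 'alerta-mailer'},
                             timeout=timeout)
        return resp.status_code
    except requests.exceptions.RequestException:
        # Covers ConnectionError and Timeout: skip this heartbeat cycle
        # rather than letting the exception take the whole process down.
        return None
```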

mailer not sending emails

Alerta-mailer doesn't seem to be sending emails; looking at the queue, only heartbeats are sent.

Zabbix bi-directional plugin needed

Akshay@designstack @ds-css Jun 18 17:09
Hi, my Alerta-Zabbix environment is set up, but when I acknowledge an event in Alerta it does not affect Zabbix. Please guide me on how to do this.
Akshay@designstack @ds-css Jun 18 21:49
What is the admin user and password? How can I get it?
Nick Satterly @satterly Jun 19 08:23
Set ADMIN_USERS in the alertad.conf file to the email/username of the users that you want to be admins. See http://alerta.readthedocs.io/en/latest/configuration.html#authentication-settings
Akshay@designstack @ds-css Jun 19 09:24
OK, thanks. If I acknowledge an event, I cannot see it acknowledged in Zabbix. How can I do this?
Nick Satterly @satterly Jun 19 09:26
Acknowledged in Zabbix means the problem is resolved, doesn't it?
So that would be like "closed" in Alerta?
Akshay@designstack @ds-css Jun 19 09:27
Yes. When a problem is not closed in Alerta and I acknowledge it there by pressing the Ack button, I want it to be closed in Zabbix.
Nick Satterly @satterly Jun 19 09:30
There is currently only "uni-directional" integration with Zabbix. I'm working on two-way now.
Akshay@designstack @ds-css Jun 19 09:30
OK
Nick Satterly @satterly Jun 19 09:30
Closing an alert in Alerta should close the event in Zabbix.
Akshay@designstack @ds-css Jun 19 09:31
OK, I am checking if this happens.
I closed an alert in Alerta but nothing happened in Zabbix.
Nick Satterly @satterly Jun 19 09:32
I should have said "will" not "should". It doesn't work yet.
Akshay@designstack @ds-css Jun 19 09:32
OK.
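Until the two-way integration lands, a plugin could acknowledge the originating event itself via the Zabbix JSON-RPC API, whose method for this is `event.acknowledge`. A sketch, where the URL, auth token and event id are illustrative:

```python
import requests

def build_ack_payload(auth_token, event_id, message='acknowledged in Alerta'):
    # Standard Zabbix JSON-RPC envelope for the event.acknowledge method.
    return {
        'jsonrpc': '2.0',
        'method': 'event.acknowledge',
        'params': {'eventids': event_id, 'message': message},
        'auth': auth_token,
        'id': 1,
    }

def zabbix_ack(zabbix_url, auth_token, event_id):
    resp = requests.post(zabbix_url + '/api_jsonrpc.php',
                         json=build_ack_payload(auth_token, event_id),
                         timeout=10)
    return resp.json()
```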

Unable to set ALERTMANAGER_SILENCE_DAYS

I've been playing with the Prometheus plugin, and found that when I set ALERTMANAGER_SILENCE_DAYS to any value it fails.

The error I see is the following (grabbed with tcpdump):

    HTTP/1.1 500 INTERNAL SERVER ERROR
    Server: nginx/1.10.3 (Ubuntu)
    Date: Wed, 06 Dec 2017 13:10:34 GMT
    Content-Type: application/json
    Content-Length: 183
    Connection: keep-alive

    {
      "code": 500, 
      "errors": null, 
      "message": "Error while running status plug-in 'alerta_prometheus': unsupported type for timedelta days component: str", 
      "status": "error"
    }

I think the underlying issue is simply that the variable defaults to an integer, but when read from an environment variable it arrives as a string, so the Python code fails when the value is passed to timedelta.

As a secondary issue, when this occurs a 500 error is returned to the client, but nothing is logged to say what happened.
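The fix is likely a one-line coercion where the setting is read, along these lines (the setting name is taken from the error message; the function name is illustrative):

```python
import os
from datetime import timedelta

def silence_period(default_days=1):
    # Values from os.environ are always strings; timedelta(days='7') raises
    # "unsupported type for timedelta days component: str", so cast first.
    days = int(os.environ.get('ALERTMANAGER_SILENCE_DAYS', default_days))
    return timedelta(days=days)
```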

Plugins still using get_body()

Hi,

Thank you for this great tool and the work you have done. A few plugins (I have not checked all of them yet) are missing the fix you put in the AMQP plugin for the deprecated get_body() method. I have fixed this in the SNS plugin and can fix the rest in a PR if you like.

Best,
-William
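A version-tolerant shim along these lines could be dropped into each affected plugin; note the attribute names here are assumptions based on the AMQP fix, and the actual replacement API depends on the alerta server version:

```python
def alert_body(alert):
    # Sketch: newer alerta releases expose a serialize property, while older
    # ones only had the (now deprecated) get_body() method. Fall back so the
    # plugin works against either. (Attribute names are assumptions.)
    try:
        return alert.serialize
    except AttributeError:
        return alert.get_body()
```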

Question - What happens when RuntimeError is thrown from plugins

Hello,
What is the behavior when RuntimeError is thrown from any of the configured plugins?
Does it try again later or stop execution completely?
Does it try to call other plugins which are configured to send notifications?

I am noticing that once a RuntimeError is thrown, the rest of the plugins are not called and I miss out on notifications completely. Is there a way to enable retries?
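Until there is a definitive answer on the server's retry behaviour, a workaround inside a plugin is to catch its own failures so one broken notifier cannot short-circuit the others. A sketch (the notifier functions are illustrative):

```python
import logging

LOG = logging.getLogger('alerta.plugins.example')

def notify_all(alert, notifiers):
    """Call every notifier, logging failures instead of re-raising them."""
    delivered = []
    for notify in notifiers:
        try:
            notify(alert)
            delivered.append(notify.__name__)
        except Exception:
            # Log and continue so the remaining notifiers still run.
            LOG.exception('notifier %s failed, continuing', notify.__name__)
    return delivered
```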
