
awslogs's People

Contributors

blueyed, carlnordenfelt, charpeni, christophgysin, code0x58, eduard93, ejhayes, galenhuntington, grahamlyons, graingert, jorgebastida, juhani-hietikko, kevinburke1, leandrodamascena, loghen41, marcosdiez, nathanleiby, philipn, rwolfson, sj26, stig, ticosax, timdp, timorantalaiho, tometzky, vlcinsky, wahabmk, wnkz, xpepper, ysung6


awslogs's Issues

Not all streams shown

Using the latest version of awslogs, I cannot see all streams in some log groups. For example, aws logs describe-log-streams --log-group-name <my-group-name> returns ten log streams, but awslogs streams <my-group-name> only returns two or three (and it's not consistent).

Similarly, there are some streams I can specify using awslogs get <my-group-name> <my-stream-name>, while other streams which do exist return No streams match your pattern '<my-stream-name>'.

Regardless of this, calling awslogs get <my-group-name> without specifying a stream name always seems to work and return all streams.

Any ideas?

Get just the logs

I send logs to CloudWatch in JSON format. When I use awslogs to fetch them, the console output has the following format:

<group> <stream name> <{JSON}>

Is there a way I can just get JSON output in the console?
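
If your awslogs version supports them, the --no-group and --no-stream flags should drop those prefixes. Failing that, a small filter works; here is a rough sketch (the script name json_only.py and the assumption that each message is single-line JSON are mine):

import json
import sys

# awslogs prints "<group> <stream> <message>"; drop the first two
# whitespace-delimited fields and keep only the JSON payload.
for line in sys.stdin:
    parts = line.split(None, 2)
    if len(parts) < 3:
        continue
    message = parts[2].strip()
    try:
        print(json.dumps(json.loads(message)))
    except ValueError:
        print(message)  # pass through anything that is not valid JSON

Usage: awslogs get <my-group-name> ALL | python json_only.py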

Allow GROUP globbing or expression

Hey @jorgebastida,
Thanks a lot for this library. I am just starting out with CloudWatch logging, so I may be missing something, but sometimes one of my groups is directly related to another group (e.g. a Lambda function calling another one).

So in this case it would be great if I could aggregate log results from more than one group, let's say:

awslogs get /aws/lambda/{auth,redis} --start='2h ago'

It would also be quite nice to see those logs chronologically sorted.

Is this by any chance a good idea, or would you recommend something else?

Thanks in advance.
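
Until something like this lands, a rough workaround is to expand the pattern yourself with boto3 and merge the per-group event iterators by timestamp. A hedged sketch (the regex stands in for {auth,redis}, the two-hour window is an example, and filter_log_events ordering is only roughly chronological per group):

import heapq
import re
import time

import boto3

client = boto3.client("logs")
pattern = re.compile(r"^/aws/lambda/(auth|redis)$")  # stands in for {auth,redis}
start = int((time.time() - 2 * 3600) * 1000)         # "2h ago" in epoch milliseconds

# Expand the pattern against the full list of log groups.
groups = []
for page in client.get_paginator("describe_log_groups").paginate():
    groups += [g["logGroupName"] for g in page["logGroups"]
               if pattern.match(g["logGroupName"])]

def events(group):
    """Yield (timestamp, group, message) tuples for one group."""
    kwargs = {"logGroupName": group, "startTime": start}
    while True:
        response = client.filter_log_events(**kwargs)
        for event in response["events"]:
            yield (event["timestamp"], group, event["message"])
        if "nextToken" not in response:
            break
        kwargs["nextToken"] = response["nextToken"]

# Each per-group iterator is (roughly) timestamp-ordered, so a heap
# merge keeps the combined output chronologically sorted.
for timestamp, group, message in heapq.merge(*(events(g) for g in groups)):
    print(timestamp, group, message)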

awslogs groups -- timeout error

root@ee394150deb7:/usr/src/app# awslogs groups

================================================================================
You've found a bug! Please, raise an issue attaching the following traceback
https://github.com/jorgebastida/awslogs/issues/new
--------------------------------------------------------------------------------
Version: 0.0.1
Python: 2.7.9 (default, Apr 22 2015, 11:56:09)
[GCC 4.9.2]
boto version: 2.38.0
Platform: Linux-3.18.11-tinycore64-x86_64-with-debian-8.0
Config: {'color_enabled': True, 'aws_region': u'eu-west-1', 'aws_access_key_id': 'SENSITIVE', 'aws_secret_access_key': 'SENSITIVE', 'func': 'list_groups'}
Args: ['/usr/local/bin/awslogs', 'groups']

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awslogs/bin.py", line 116, in main
    getattr(logs, options.func)()
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 245, in list_groups
    for group in self.get_groups():
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 257, in get_groups
    response = self.connection.describe_log_groups(next_token=next_token)
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 39, in aws_connection_wrap
    return getattr(self.connection, name)(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/boto/logs/layer1.py", line 268, in describe_log_groups
    body=json.dumps(params))
  File "/usr/local/lib/python2.7/site-packages/boto/logs/layer1.py", line 576, in make_request
    body=json_body)
JSONResponseError: JSONResponseError: 400 Bad Request
{u'message': u'Signature expired: 20150518T144117Z is now earlier than 20150518T150856Z (20150518T151356Z - 5 min.)', u'__type': u'InvalidSignatureException'}
================================================================================
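
The InvalidSignatureException above is AWS rejecting a request signed with a clock that is more than about five minutes off, so this looks like local clock drift rather than an awslogs bug; syncing the clock (e.g. via NTP) should fix it. A quick, hedged way to estimate the drift from any HTTPS Date header:

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

# Compare the local clock against the Date header of a well-known endpoint.
with urlopen("https://aws.amazon.com") as response:
    server_time = parsedate_to_datetime(response.headers["Date"])
print("local clock skew:", datetime.now(timezone.utc) - server_time)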

Loop forever bug

[root@pl ~]# awslogs get /var/log/messages -s '2h' --aws-region='us-east-1'
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/gevent/greenlet.py", line 519, in run
    result = self._run(*self.args, **self.kwargs)
  File "/usr/lib/python2.7/site-packages/awslogs/core.py", line 185, in _raw_events_queue_consumer
    timestamp, line = self.raw_events_queue.peek(timeout=1)
  File "/usr/lib64/python2.7/site-packages/gevent/queue.py", line 305, in peek
    self.getters.discard(waiter)
AttributeError: 'collections.deque' object has no attribute 'discard'
<Greenlet at 0x2c53730: <bound method AWSLogs._raw_events_queue_consumer of <awslogs.core.AWSLogs object at 0x2b93f10>>> failed with AttributeError

You've found a bug! Please, raise an issue attaching the following traceback

https://github.com/jorgebastida/awslogs/issues/new

Version: 0.0.2
Python: 2.7.3 (default, Aug 9 2012, 17:23:57)
[GCC 4.7.1 20120720 (Red Hat 4.7.1-5)]
boto version: 2.38.0
Platform: Linux-3.11.10-100.fc18.x86_64-x86_64-with-fedora-18-Spherical_Cow
Config: {'output_group_enabled': True, 'end': None, 'aws_secret_access_key': 'SENSITIVE', 'log_stream_name': u'ALL', 'aws_region': u'us-east-1', 'watch': False, 'start': u'2h', 'log_group_name': u'/var/log/messages', 'func': 'list_logs', 'aws_access_key_id': 'SENSITIVE', 'color_enabled': True, 'output_stream_enabled': True}
Args: ['/bin/awslogs', 'get', '/var/log/messages', '-s', '2h', '--aws-region=us-east-1']

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/awslogs/bin.py", line 121, in main
    getattr(logs, options.func)()
  File "/usr/lib/python2.7/site-packages/awslogs/core.py", line 236, in list_logs
    pool.join()
  File "/usr/lib64/python2.7/site-packages/gevent/pool.py", line 461, in join
    self._empty_event.wait(timeout=timeout)
  File "/usr/lib64/python2.7/site-packages/gevent/event.py", line 86, in wait
    result = self.hub.switch()
  File "/usr/lib64/python2.7/site-packages/gevent/hub.py", line 510, in switch
    return greenlet.switch(self)

LoopExit: ('This operation would block forever', <Hub at 0x2b17730 epoll default pending=0 ref=0 fileno=3 resolver=<gevent.resolver_thread.Resolver at 0x2c76e10 pool=<ThreadPool at 0x2cce710 0/8/10>> threadpool=<ThreadPool at 0x2cce710 0/8/10>>)

--watch option doesn't seem to watch

Running this command...

awslogs get --watch /var/log/php/php_errors.php ALL

runs fine and gives me the last 1200-odd log messages. The prompt then sits there waiting, but as logs are entering AWS CloudWatch Logs, there is no hint of them in the output of the above command.
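
For reference, this is roughly what a working watch loop has to do (a sketch with boto3, not awslogs' actual implementation; nextToken pagination is omitted for brevity): poll filter_log_events and advance startTime past the newest event seen.

import time

import boto3

client = boto3.client("logs")
group = "/var/log/php/php_errors.php"
start = int(time.time() * 1000)  # tail from "now", in epoch milliseconds

while True:
    response = client.filter_log_events(logGroupName=group, startTime=start)
    for event in response.get("events", []):
        print(event["message"])
        start = max(start, event["timestamp"] + 1)  # never re-fetch what we printed
    time.sleep(2)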

TypeError parsing date.

Hi.

I've just noticed that passing Wednesday 27 January, 2016 20:09:58 UTC to --start and --end causes a stack trace:

Traceback (most recent call last):
  File "/Users/lkostka/.virtualenvs/ansible/lib/python2.7/site-packages/awslogs/bin.py", line 138, in main
    logs = AWSLogs(**vars(options))
  File "/Users/lkostka/.virtualenvs/ansible/lib/python2.7/site-packages/awslogs/core.py", line 49, in __init__
    self.start = self.parse_datetime(kwargs.get('start'))
  File "/Users/lkostka/.virtualenvs/ansible/lib/python2.7/site-packages/awslogs/core.py", line 234, in parse_datetime
    return int(total_seconds(date - datetime(1970, 1, 1))) * 1000
TypeError: can't subtract offset-naive and offset-aware datetimes

The exact same string works in dateutil:

In [1]: from dateutil.parser import parse

In [2]: parse("Wednesday 27 January, 2016 20:09:58 UTC")
Out[2]: datetime.datetime(2016, 1, 27, 20, 9, 58, tzinfo=tzutc())
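
The crash comes from subtracting a naive epoch datetime from the offset-aware result dateutil returns for strings carrying a timezone. Normalizing both sides to UTC avoids it; a sketch (the assumption that naive input means UTC is mine):

from datetime import datetime, timezone

from dateutil.parser import parse

def to_epoch_ms(text):
    """Parse a date string and return epoch milliseconds."""
    date = parse(text)
    if date.tzinfo is None:
        date = date.replace(tzinfo=timezone.utc)  # assumption: naive input is UTC
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return int((date - epoch).total_seconds() * 1000)

to_epoch_ms("Wednesday 27 January, 2016 20:09:58 UTC")  # no TypeError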

IOError: [Errno 32] Broken pipe in --watch option

When I run awslogs with the --watch option and keep it running, after some time I get the error below:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 813, in __bootstrap_inner
    self.run()
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 132, in consumer
    sys.stdout.flush()
IOError: [Errno 32] Broken pipe

Regards,
Chintan Patel

Problems to stop the command

Debian Jessie 64 bit, Python 2.7.9

Using awslogs==0.1.1:

$ pip freeze
argparse==1.2.1
awslogs==0.1.1
boto3==1.2.1
botocore==1.3.1
docutils==0.12
futures==2.2.0
jmespath==0.9.0
python-dateutil==2.4.2
six==1.10.0
termcolor==1.1.0
wsgiref==0.1.2

Calling get without any argument (which is an error):

$ awslogs get
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/javl/.virtualenvs/awslogs/local/lib/python2.7/site-packages/awslogs/core.py", line 139, in generator
    response = self.client.filter_log_events(**kwargs)
  File "/home/javl/.virtualenvs/awslogs/local/lib/python2.7/site-packages/botocore/client.py", line 310, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/javl/.virtualenvs/awslogs/local/lib/python2.7/site-packages/botocore/client.py", line 395, in _make_api_call
    raise ClientError(parsed_response, operation_name)
ClientError: An error occurred (ResourceNotFoundException) when calling the FilterLogEvents operation: The specified log group does not exist.

and one cannot stop the program with Ctrl-C. I had to use killall awslogs.

It behaves similarly when I use a wrong group name.

Using a proper group name and the --watch option works, but there is also no "clean" way to stop it with Ctrl-C.

It only reacts by:

^CYou pressed Ctrl+C!

but hangs until I kill it.
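
A likely cause (my assumption, not confirmed from the code) is non-daemon worker threads keeping the interpreter alive after the main thread sees the KeyboardInterrupt. A sketch of the usual fix, with fetch_logs standing in for the real worker:

import threading
import time

def fetch_logs():  # stand-in for the worker that talks to CloudWatch
    while True:
        time.sleep(1)

worker = threading.Thread(target=fetch_logs)
worker.daemon = True  # daemon threads do not keep the interpreter alive
worker.start()

try:
    while worker.is_alive():
        worker.join(timeout=0.5)  # a timed join stays interruptible by Ctrl-C
except KeyboardInterrupt:
    raise SystemExit("Interrupted")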

Providing stream as '*' errors out

You've found a bug! Please, raise an issue attaching the following traceback

https://github.com/jorgebastida/awslogs/issues/new

Version: 0.7.0
Python: 3.5.2 (default, Aug 16 2016, 05:35:40)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
boto3 version: 1.4.0
Platform: Darwin-15.6.0-x86_64-i386-64bit
Config: {'log_group_name': '/nile/perso.nile.works/heliograf-election2016-admin', 'output_stream_enabled': True, 'aws_secret_access_key': 'SENSITIVE', 'output_timestamp_enabled': False, 'log_stream_name': '', 'aws_profile': 'SENSITIVE', 'aws_session_token': 'SENSITIVE', 'output_ingestion_time_enabled': False, 'aws_region': 'us-east-1', 'end': None, 'watch': True, 'func': 'list_logs', 'color_enabled': True, 'output_group_enabled': True, 'aws_access_key_id': 'SENSITIVE', 'query': None, 'filter_pattern': None, 'start': '10 min'}
Args: ['/Users/johria/.virtualenvs/heliograf/bin/awslogs', 'get', '/nile/perso.nile.works/heliograf-election2016-admin', '*', '--profile', 'main', '-s', '10 min', '--watch']

Traceback (most recent call last):
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/site-packages/awslogs/bin.py", line 166, in main
    getattr(logs, options.func)()
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/site-packages/awslogs/core.py", line 75, in list_logs
    streams = list(self._get_streams_from_pattern(self.log_group_name, self.log_stream_name))
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/site-packages/awslogs/core.py", line 67, in _get_streams_from_pattern
    reg = re.compile('^{0}'.format(pattern))
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/re.py", line 224, in compile
    return _compile(pattern, flags)
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/re.py", line 293, in _compile
    p = sre_compile.compile(pattern, flags)
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/sre_compile.py", line 536, in compile
    p = sre_parse.parse(p, flags)
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/sre_parse.py", line 829, in parse
    p = _parse_sub(source, pattern, 0)
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/sre_parse.py", line 437, in _parse_sub
    itemsappend(_parse(source, state))
  File "/Users/johria/.virtualenvs/heliograf/lib/python3.5/sre_parse.py", line 638, in _parse
    source.tell() - here + len(this))
sre_constants.error: nothing to repeat at position 1
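
The stream argument is compiled directly as a regular expression, and '*' alone is not a valid one. Treating the argument as a shell-style glob first would avoid the crash; a sketch:

import fnmatch
import re

pattern = "*"
# fnmatch.translate turns the glob '*' into a valid regex ('.*' plus anchors)
# instead of letting re.compile choke on a bare quantifier.
regex = re.compile(fnmatch.translate(pattern))
print(bool(regex.match("2016/08/16/[$LATEST]abcdef0123456789")))  # True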

Add a means to specify region when using profile

This is an enhancement request to add a "--region" command-line option to complement the existing "--profile" one.

Right now, it is quite easy to run awslogs groups --profile production or awslogs groups --profile staging with appropriately configured ~/.aws/config and ~/.aws/credentials files (and delegated/federated AWS access credentials). However, if you have specified the default "region" alongside the profile stored in your ~/.aws/config file, there is no easy way to override the region when using the awslogs command line.

For example, your ~/.aws/config might contain:

# ...
[profile something]
# not really needed
region = ap-east-1

[profile staging]
source_profile = something
role_arn = arn:aws:iam::123456789012:role/whatever/staging/authentication
# merely specify the default region;  not the only region used in this account
region = eu-west-1

[profile production]
source_profile = something
role_arn = arn:aws:iam::456789012345:role/whatever/production/authentication
# merely specify the default region;  not the only region used in this account
region = us-west-1
# ...

... and with a corresponding ~/.aws/credentials file containing:

# ...
[something]
aws_access_key_id = ASDFGHJKQWERTYUI1234
aws_secret_access_key = werpg98y3q4p0r9ur984h/45easdfseru23WFWERW4
# ...

Users would then need to create multiple profiles to access resources stored in the same AWS account but in different regions (or use native "awscli" log commands instead).

Perhaps it would be possible to add a "--region" option to complement the existing "--profile" option.
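
For what it's worth, boto3 already supports exactly this combination, so the flag should mostly be plumbing. A sketch (the profile and region names are examples):

import boto3

# An explicit region overrides the profile's default region,
# which is what a --region flag would pass through.
session = boto3.Session(profile_name="staging", region_name="us-east-1")
client = session.client("logs")
print(client.describe_log_groups(limit=1))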

Add --timestamp and --ingested-time options

The current version, 0.2.0, allows printing log records, but does not allow printing ingestedTime and timestamp in separate columns.

When is printing timestamp and ingestedTime relevant

In many cases, log records contain the record's timestamp as part of their text, but there are still good reasons to show the ingestedTime and timestamp recorded by AWS:

  • log records do not have to contain any timestamp information
  • the time reported by a log record might be wrong (e.g. due to an unsynced clock on the computer where it was generated)
  • sometimes it is practical to compare the timestamp reported by the log record with the one parsed by AWS, plus the time it was ingested.

Proposed behaviour

By default, no timestamp and ingestedTime columns are printed.

Add options --timestamp and --ingested-time. If used, timestamp and ingestedTime columns are printed.

The order of columns shall be:

  • group
  • stream
  • timestamp
  • ingestedTime
  • log record text

This order has the advantage that columns with predictable width and delimiters come first and the variable-length part comes at the end. It should simplify processing of the awslogs output by other tools.

Both times shall be expressed in RFC3339 format in UTC; this ensures the output does not depend on where the command is run. As both times carry milliseconds, the datetime might look like 2016-01-19T22:03:36.123Z.
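
A sketch of the proposed formatting (plain standard library; the helper name is my own):

from datetime import datetime, timezone

def rfc3339_utc(epoch_ms):
    """Render an AWS millisecond timestamp as e.g. 2016-01-19T22:03:36.123Z."""
    date = datetime.fromtimestamp(epoch_ms / 1000.0, tz=timezone.utc)
    return date.strftime("%Y-%m-%dT%H:%M:%S.") + "%03dZ" % (date.microsecond // 1000)

print(rfc3339_utc(1461715200000))  # 2016-04-27T00:00:00.000Z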

Conclusions

There exists PR #26, which is getting close, but would have to be modified to meet the requirements mentioned above, namely:

  • allow output of ingestedTime

rewrite tests to `pytest`

This is a proposal, not a bug.

Currently, the only way I am aware of to run the tests is $ python setup.py test.

I do not know if one can run only a specific test (which would speed up development of particular features).

A rewrite to pytest would result in:

  • moving all existing tests into a tests folder (preserving all existing test cases)
  • more modular test suite code
  • nicer problem reports on failure (nicely coloured, more expressive)
  • an easy way to discover existing tests without running them
  • an easy way to run just a particular test
  • coverage reporting

I am asking for opinions before I dive into it, as it will take a few hours of work.

Credentials Fail : NoAuthHandlerFound: No handler was ready to authenticate.

Trying to use awslogs for the first time; I already had aws-cli set up with credentials, but I get this error:

Version: 0.0.1
Python: 2.7.8 (default, Sep 26 2014, 11:28:16)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)]
boto version: 2.38.0
Platform: Darwin-14.3.0-x86_64-i386-64bit
Config: {'color_enabled': True, 'aws_region': u'eu-west-1', 'aws_access_key_id': 'SENSITIVE', 'aws_secret_access_key': 'SENSITIVE', 'func': 'list_groups'}
Args: ['/usr/local/bin/awslogs', 'groups']

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awslogs/bin.py", line 115, in main
    logs = AWSLogs(**vars(options))
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 83, in __init__
    aws_secret_access_key=self.aws_secret_access_key
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 27, in __init__
    self.connection = botologs.connect_to_region(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/boto/logs/__init__.py", line 40, in connect_to_region
    return region.connect(**kw_params)
  File "/usr/local/lib/python2.7/site-packages/boto/regioninfo.py", line 187, in connect
    return self.connection_cls(region=self, **kw_params)
  File "/usr/local/lib/python2.7/site-packages/boto/logs/layer1.py", line 107, in __init__
    super(CloudWatchLogsConnection, self).__init__(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 1100, in __init__
    provider=provider)
  File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 569, in __init__
    host, config, self.provider, self._required_auth_capability())
  File "/usr/local/lib/python2.7/site-packages/boto/auth.py", line 987, in get_auth_handler
    'Check your credentials' % (len(names), str(names)))
NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials

Should catch EPIPE on standard output

When running awslogs get $GROUP $STREAM | head, the EPIPE exception is uncaught:

================================================================================
You've found a bug! Please, raise an issue attaching the following traceback
https://github.com/jorgebastida/awslogs/issues/new
--------------------------------------------------------------------------------
Version: 0.7.0
Python: 2.7.12 (default, Jun 29 2016, 14:05:02) 
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
boto3 version: 1.2.3
Platform: Darwin-15.6.0-x86_64-i386-64bit
Config: {'output_timestamp_enabled': False, 'output_group_enabled': True, 'end': None, 'log_group_name': 'xxxxxxxx', 'log_stream_name': 'xxxxxxxx', 'aws_region': None, 'watch': False, 'aws_access_key_id': 'SENSITIVE', 'start': '5m', 'aws_profile': 'SENSITIVE', 'filter_pattern': None, 'aws_secret_access_key': 'SENSITIVE', 'output_ingestion_time_enabled': False, 'query': None, 'func': 'list_logs', 'aws_session_token': 'SENSITIVE', 'color_enabled': True, 'output_stream_enabled': True}
Args: ['/usr/local/bin/awslogs', 'get', 'xxxxxx', 'yyyyy']

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awslogs/bin.py", line 166, in main
    getattr(logs, options.func)()
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 187, in list_logs
    consumer()
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 185, in consumer
    sys.stdout.flush()
IOError: [Errno 32] Broken pipe

"Broken pipe" errors on standard output should usually be caught and ignored.

awslogs streams <group> displays nothing in some situations

If you have a log group that contains streams but those streams were not written to after current_time+5m as calculated by parse_datetime(), the streams command will not display them. I would expect awslogs streams <group> to display all the streams inside the group regardless of when they were last written to.
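
For comparison, listing every stream with boto3's paginator and no time-window filtering looks like this (an illustration, not awslogs' code; the group name is a placeholder):

import boto3

client = boto3.client("logs")
paginator = client.get_paginator("describe_log_streams")
for page in paginator.paginate(logGroupName="my-group"):
    for stream in page["logStreams"]:
        print(stream["logStreamName"])  # every stream, however old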

Support `--start-from-head` option

Support a --start-from-head option for getting log events from the beginning. By default (without that option), we would get only the very latest log records ("tailing").

This is not as simple as it sounds (conflicting options, multiple scenarios...).

Implementation

Using missing startFromHead parameter in filter_log_events

CloudWatchLogs.Client.get_log_events supports the startFromHead parameter.

Unfortunately, the code uses CloudWatchLogs.Client.filter_log_events, which does not support such an option.

I would propose to keep using the filter_log_events method and pass startTime with the current time.

Client with an unsynced clock

Using the current time can fail if the user has bad time synchronization (being ahead of time would cause a long period of silence).

Options are:

  • ignore this scenario and declare a properly synced clock a prerequisite
  • get the current time from some "live" request to AWS
  • make one initial call to get_log_events with startFromHead set to False and use the timestamp of the latest record as the time for our --start (I did not test whether it returns any record at all if no new record is added at that moment).
  • after we make the call to filter_log_events, get the current time from the HTTP headers of that response (I am not aware of a simple method to do so), and if it differs by more than 30 seconds from the local time, print a warning to stderr and continue.

Ignoring seems fine to me.

Conflicting --start and --start-from-head options

These two options are mutually exclusive. In case both are set by the user, we have the following options:

  • use --start and ignore --start-from-head, possibly printing a warning to stderr
  • reject such a combination of options and exit

Exiting seems better to me (we force the user to think about what they really want to get).

--start-from-head being off by default

When there is no --start value, awslogs shall start from the tail of the log. This seems to me the intuitive behaviour (the user wants to see what is happening right now). It fits very well with --watch, where large logs would often take too long to reach the current records, which are what the user probably cares about.

PS: Did I tell you I like the tool a lot?

Consider rewrite of command line interface to Click

I am considering a rewrite of the command line UI to Click. We could benefit from it in multiple places:

Simpler testing (no mockup for stdout/stderr needed)

Click provides a way to write test cases that capture stdout without needing to mock the stdout stream.

Our test cases would get simpler and would probably have fewer issues with stdout differences between Python 2 and 3.

I recently had a few issues with the existing stdout mock, so I skipped some test cases that would have been handy.

Support for subcommands

We have three subcommands. The current implementation uses argparse, which supports subcommands too, but is not very clear or easy.

Issue #45 (Abort in case of unknown option) would be resolved by that.

Safer output with python 2 and 3 with click.echo

Using click.echo we could prevent some nasty problems with Python 2 and 3 console encodings. click.echo is like print, but does its best to smooth over the differences between the two Pythons.

It would likely resolve issue #44 (Exception Unicode...).

Configuration from files or env variables

It would be very easy to make options configurable from environment variables. Support for an application configuration file is also present.

Color output with autodetection if console is color capable

We output with colors. This might be an issue on terminals that are not color capable.

Click can autodetect terminals that are unable to handle colors.

Bash autocompletion

Click can generate an autocompletion script for bash (no other shells are supported at the moment).

Easy support for paging long outputs

Our output can get multiple pages long. It is easy to support paging where pagers are available.

The following is a short test of how it works:

import click                                                                                       


def gen_lines(count, suffix):                                                                      
    tmpl = "{i}: {suffix}"                                                                         
    for i in range(count):                                                                         
        yield tmpl.format(i=i, suffix=suffix)                                                      


@click.command()                                                                                   
@click.option("--count", default=300)                                                              
@click.option("--suffix", default="some more line content")                                        
def page(count, suffix):                                                                           
    """Command printing multiple pages of some text. Testing paging"""                             
    click.echo_via_pager("\n".join(gen_lines(count, suffix)))                                      


if __name__ == "__main__":
    page() 

Conclusions

I feel we could benefit from converting the CLI from argparse to Click.

If there is a chance of getting such a pull request accepted, I can write it.

The plan is as follows:

  • convert the whole CLI from argparse to Click (preserving existing features)
  • rewrite the test cases to use Click's test support
    • resolve issue #45 at the same time
    • change all print calls into click.echo, making sure issue #44 gets resolved

This shall conclude this issue.

The following features should be left for further modifications:

  • configuring the command via environment variables
  • configuring the command via an application configuration file
  • terminal color support autodetection
  • pager support
  • bash autocompletion

Let me know if there is a chance for such a PR to be accepted.

Use credentials profiles

It would be nice to use credential profiles as in aws cli:

aws s3 .. --profile company1
aws s3 .. --profile company2

It seems it's not actually possible at the moment.

Not able to use awslogs due to pkg_resources.DistributionNotFound: gevent>=1.0

For some reason, awslogs always throws the following error when running awslogs or any other subcommand:

Traceback (most recent call last):
  File "/usr/local/bin/awslogs", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2707, in <module>
    working_set.require(__requires__)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 686, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 584, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: gevent>=1.0

Any idea on what could be wrong? This tool looks super awesome and I'd love to be able to use it :)

(Configured ~/.aws using awscli)

Get all / multiple groups

The synopsis for awslogs get says:

awslogs get [GROUP [STREAM_EXPRESSION]]

But GROUP is not optional.

It would be nice if awslogs get retrieved all streams in all groups, allowing awslogs get --watch to watch everything.

Additionally, GROUP could allow (regular) expressions to select several groups, but not all of them.

Error getting logs

I get this exception:

$ awslogs get /aws/lambda/myfunction ALL -s1d
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/gevent/greenlet.py", line 519, in run
    result = self._run(*self.args, **self.kwargs)
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 186, in _raw_events_queue_consumer
    timestamp, line = self.raw_events_queue.peek(timeout=1)
  File "/usr/local/lib/python2.7/site-packages/gevent/queue.py", line 305, in peek
    self.getters.discard(waiter)
AttributeError: 'collections.deque' object has no attribute 'discard'
<Greenlet at 0x1087fab90: <bound method AWSLogs._raw_events_queue_consumer of <awslogs.core.AWSLogs object at 0x10875b150>>> failed with AttributeError


================================================================================
You've found a bug! Please, raise an issue attaching the following traceback
https://github.com/jorgebastida/awslogs/issues/new
--------------------------------------------------------------------------------
Version: 0.0.3
Python: 2.7.10 (default, Jul 20 2015, 10:27:34)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.57.2)]
boto version: 2.38.0
Platform: Darwin-15.0.0-x86_64-i386-64bit
Config: {'output_group_enabled': True, 'end': None, 'log_group_name': u'/aws/lambda/myfunction', 'log_stream_name': u'ALL', 'watch': False, 'aws_region': u'eu-west-1', 'start': u'1d', 'aws_secret_access_key': 'SENSITIVE', 'func': 'list_logs', 'aws_access_key_id': 'SENSITIVE', 'color_enabled': True, 'output_stream_enabled': True}
Args: ['/usr/local/bin/awslogs', 'get', '/aws/lambda/myfunction', 'ALL', '-s1d']

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awslogs/bin.py", line 125, in main
    getattr(logs, options.func)()
  File "/usr/local/lib/python2.7/site-packages/awslogs/core.py", line 237, in list_logs
    pool.join()
  File "/usr/local/lib/python2.7/site-packages/gevent/pool.py", line 461, in join
    self._empty_event.wait(timeout=timeout)
  File "/usr/local/lib/python2.7/site-packages/gevent/event.py", line 86, in wait
    result = self.hub.switch()
  File "/usr/local/lib/python2.7/site-packages/gevent/hub.py", line 510, in switch
    return greenlet.switch(self)
LoopExit: ('This operation would block forever', <Hub at 0x1087034b0 select default pending=0 ref=0 resolver=<gevent.resolver_thread.Resolver at 0x108835e50 pool=<ThreadPool at 0x108840e10 0/8/10>> threadpool=<ThreadPool at 0x108840e10 0/8/10>>)
================================================================================

Crash when attempting get

The command:

awslogs --aws-region us-east-1 get qaCatcher --start='10m ago' | grep ERROR

The output:

You've found a bug! Please, raise an issue attaching the following traceback

https://github.com/jorgebastida/awslogs/issues/new

Version: 0.0.1
Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2]
boto version: 2.38.0
Platform: Linux-3.13.0-49-generic-x86_64-with-Ubuntu-14.04-trusty
Config: {'output_group_enabled': True, 'end': None, 'aws_secret_access_key': 'SENSITIVE', 'log_stream_name': u'ALL', 'watch': False, 'aws_region': u'us-east-1', 'start': u'1h ago', 'log_group_name': u'qaCatcher', 'func': 'list_logs', 'aws_access_key_id': 'SENSITIVE', 'color_enabled': True, 'output_stream_enabled': True}
Args: ['/usr/local/bin/awslogs', '--aws-region', 'us-east-1', 'get', 'qaCatcher', '--start=1h ago']

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/awslogs/bin.py", line 116, in main
    getattr(logs, options.func)()
  File "/usr/local/lib/python2.7/dist-packages/awslogs/core.py", line 224, in list_logs
    pool.join()
  File "/usr/local/lib/python2.7/dist-packages/gevent/pool.py", line 100, in join
    self._empty_event.wait(timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/gevent/event.py", line 77, in wait
    result = self.hub.switch()
  File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 331, in switch
    return greenlet.switch(self)
LoopExit: This operation would block forever

rewrite to use `tox`

Currently, tests are run by $ python setup.py test.

Running the tests under multiple Python versions requires manually switching the Python version for each run, or waiting until Travis does the work.

Rewriting the tests to tox would allow running them under all locally available Python versions with a simple $ tox call.

The rewrite should keep all existing functionality, namely:

  • run all existing test cases
  • run test coverage evaluation

output has extra newline after each line

I just installed awslogs and it's great!

but the output has an extra newline after each line. This isn't expected behavior, is it?

This is probably a misconfiguration of something on my end, but I can't figure it out. What could be causing this?

/aws/lambda/sign_up_event 2016/04/27/[$LATEST]f72216a09782456ba55e1d7b90ff3c45 START RequestId: 45a0ec89-0ca5-11e6-8867-81e1d126634d Version: $LATEST

/aws/lambda/sign_up_event 2016/04/27/[$LATEST]f72216a09782456ba55e1d7b90ff3c45 Received event: {

...event data...

/aws/lambda/sign_up_event 2016/04/27/[$LATEST]f72216a09782456ba55e1d7b90ff3c45 }

/aws/lambda/sign_up_event 2016/04/27/[$LATEST]f72216a09782456ba55e1d7b90ff3c45 END RequestId: 45a0ec89-0ca5-11e6-8867-81e1d126634d

/aws/lambda/sign_up_event 2016/04/27/[$LATEST]f72216a09782456ba55e1d7b90ff3c45 REPORT RequestId: 45a0ec89-0ca5-11e6-8867-81e1d126634d   Duration: 4046.79 ms    Billed Duration: 4100 ms    Memory Size: 128 MB Max Memory Used: 22 MB  
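
One plausible cause (an assumption on my part): Lambda event messages already end with a newline, and print() appends a second one. Stripping before printing would avoid the blank lines:

def emit(message):
    # Strip the trailing newline the event may already carry;
    # print() will add exactly one back.
    print(message.rstrip("\n"))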

refactor the code to follow PEP 8

In some places, lines exceed 80 characters, which does not follow PEP 8.

In other places, variables or imports are declared but never used.

The refactoring should not be difficult and should make the code easier to maintain in the future.

Anyway, I did not dare to do it while there were pending pull requests, as it would require their authors to update their contributions.

Access Denied Exception should be a handled error

Currently it shows:

Version: 0.0.1
Python: 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
boto version: 2.38.0
Platform: Darwin-13.4.0-x86_64-i386-64bit
Config: {'output_group_enabled': True, 'end': None, 'aws_secret_access_key': 'XXXX', 'log_stream_name': u'ALL', 'watch': False, 'aws_region': u'eu-west-1', 'start': u'2w ago', 'log_group_name': u'XXXX', 'func': 'list_logs', 'aws_access_key_id': 'XXXX', 'color_enabled': True, 'output_stream_enabled': True}
Args: ['/usr/local/bin/awslogs', 'get', 'XXXX', '--start=2w ago']

Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/awslogs/bin.py", line 116, in main
    getattr(logs, options.func)()
  File "/Library/Python/2.7/site-packages/awslogs/core.py", line 213, in list_logs
    self.register_publishers()
  File "/Library/Python/2.7/site-packages/awslogs/core.py", line 228, in register_publishers
    for group, stream in self._get_streams_from_patterns(self.log_group_name, self.log_stream_name):
  File "/Library/Python/2.7/site-packages/awslogs/core.py", line 89, in _get_streams_from_patterns
    for group in self._get_groups_from_pattern(log_group_pattern):
  File "/Library/Python/2.7/site-packages/awslogs/core.py", line 98, in _get_groups_from_pattern
    for group in self.get_groups():
  File "/Library/Python/2.7/site-packages/awslogs/core.py", line 257, in get_groups
    response = self.connection.describe_log_groups(next_token=next_token)
  File "/Library/Python/2.7/site-packages/awslogs/core.py", line 39, in aws_connection_wrap
    return getattr(self.connection, name)(*args, **kwargs)
  File "/Library/Python/2.7/site-packages/boto/logs/layer1.py", line 268, in describe_log_groups
    body=json.dumps(params))
  File "/Library/Python/2.7/site-packages/boto/logs/layer1.py", line 576, in make_request
    body=json_body)
JSONResponseError: JSONResponseError: 400 Bad Request
{u'Message': u'User: XXXX is not authorized to perform: logs:DescribeLogGroups on resource: XXXX', u'__type': u'AccessDeniedException'}

Tag releases

I would like to package this for Arch Linux; could you start tagging the stable release commits?

cheers

Exception: UnicodeEncodeError

I ran awslogs get my.log -s24h and the following happened:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 505, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/pwilczynski/.virtualenvs/ansible/local/lib/python2.7/site-packages/awslogs/core.py", line 111, in consumer
    print(' '.join(output))
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2202' in position 224: ordinal not in range(128)
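
Under Python 2, stdout falls back to ASCII when it is not a terminal, which is what this traceback shows. A common fix is to wrap stdout with a UTF-8 writer; a sketch:

import codecs
import sys

# When stdout is piped, Python 2 reports no encoding and defaults to ascii.
if sys.stdout.encoding is None:
    sys.stdout = codecs.getwriter("utf-8")(sys.stdout)

print(u"\u2202 now survives a pipe")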

Error listing streams

awslogs streams /var/log/syslog
<<list of some streams>>

================================================================================
You've found a bug! Please, raise an issue attaching the following traceback
https://github.com/jorgebastida/awslogs/issues/new
--------------------------------------------------------------------------------
Version: 0.1.2
Python: 2.7.10 (default, Aug 22 2015, 20:33:39)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.1)]
boto3 version: 1.2.2
Platform: Darwin-15.0.0-x86_64-i386-64bit
Config: {'end': None, 'log_group_name': '/var/log/syslog', 'aws_region': None, 'aws_access_key_id': 'SENSITIVE', 'start': '24h', 'aws_secret_access_key': 'SENSITIVE', 'func': 'list_streams', 'aws_session_token': 'SENSITIVE'}
Args: ['/Users/omarkj/Library/Python/2.7/bin/awslogs', 'streams', '/var/log/syslog']

Traceback (most recent call last):
  File "/Users/omarkj/Library/Python/2.7/lib/python/site-packages/awslogs/bin.py", line 121, in main
    getattr(logs, options.func)()
  File "/Users/omarkj/Library/Python/2.7/lib/python/site-packages/awslogs/core.py", line 178, in list_streams
    for stream in self.get_streams():
  File "/Users/omarkj/Library/Python/2.7/lib/python/site-packages/awslogs/core.py", line 197, in get_streams
    if max(stream['firstEventTimestamp'], window_start) <= \
KeyError: 'firstEventTimestamp'
================================================================================
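
Streams that exist but never received an event apparently lack these keys, so a fallback instead of direct indexing would avoid the KeyError. A sketch of the window check with that guard (my reconstruction of the surrounding logic, not the actual code):

def stream_in_window(stream, window_start, window_end):
    # Fall back when a stream never received events and lacks the keys.
    first = stream.get("firstEventTimestamp", 0)
    last = stream.get("lastIngestionTime", first)
    return max(first, window_start) <= min(last, window_end)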

Consider using `placebo` for mocking AWS calls

There is a package (by Mitch Garnaat - initial author of boto) named placebo. It claims:

Placebo allows you to mock boto3 calls that look just like normal calls but actually have no effect at all. It does this by allowing you to record a set of calls and save them to a data file and then replay those calls later (e.g. in a unit test) without ever hitting the AWS endpoints.

To me it looks like a good candidate for replacing our current hand-made AWS responses, which are incomplete in content.

Thinking of possible results

  • using placebo, we record calls to the AWS service once, save them, and use them later in unit tests.
  • our unit tests would use placebo to create recorded AWS responses directly by "sending log records" to AWS. The recorded calls (and responses) would then be used in the tests. Doing this on each test run would make it much easier to modify log records and verify they are really present.
  • using placebo-recorded calls, we would create a fixture that builds "recorded" responses from log records. This is somewhere halfway between the previous two options.

In all cases, we would be using some (really recorded or manually shaped) recorded calls/responses in our tests. The replay process would probably use placebo.

Proposed steps

  • initial exploration of placebo: does it work at all? How does it record and replay calls?
  • explore placebo for awslogs-related calls: does it meet our expectations? Does it work in this case at all?
  • decide how to use placebo.

If all goes well, we could continue:

  • replace existing in-line generated AWS responses by placebo.pill
  • consider generating "recorded calls" at setup/fixture-creation time to allow testing from input log records to output log records.

rewrite setup.py to `pbr` and `setup.cfg`

This is a proposal, not a bug.

The existing setup.py is already using pbr, probably to incorporate the test command.

If we rewrite the setup.py into pbr, it would result in:

  • all existing functionality still present
  • 3-lines long setup.py
  • all metadata moved into setup.cfg
  • the version being derived from the git tag (we may choose another method, like hard-coding the version in this file or in an external one)
  • an autogenerated ChangeLog based on commit messages (if we want that)
  • an autogenerated AUTHORS file from commit messages
  • requirements.txt being externalized from setup.py

Apart from cleaning up the code, this could resolve issue #37.

Issue installing with pip

Hello! I cannot get your script (version 0.1.0) to install using pip. There appears to be a dependency conflict issue on botocore. Here's what I get when I try to install:

ᐅ pip install awslogs
Collecting awslogs
  Using cached awslogs-0.1.0.tar.gz
Collecting jmespath==0.7.1 (from awslogs)
  Using cached jmespath-0.7.1-py2.py3-none-any.whl
Collecting botocore<=1.2.3 (from awslogs)
  Using cached botocore-1.2.3-py2.py3-none-any.whl
Collecting boto3>1.0.0 (from awslogs)
  Using cached boto3-1.2.1-py2.py3-none-any.whl
Collecting termcolor>=1.1.0 (from awslogs)
  Using cached termcolor-1.1.0.tar.gz
Collecting python-dateutil>=2.4.0 (from awslogs)
  Using cached python_dateutil-2.4.2-py2.py3-none-any.whl
Collecting docutils>=0.10 (from botocore<=1.2.3->awslogs)
  Using cached docutils-0.12-py3-none-any.whl
Collecting six>=1.5 (from python-dateutil>=2.4.0->awslogs)
  Using cached six-1.10.0-py2.py3-none-any.whl
Installing collected packages: six, docutils, python-dateutil, termcolor, boto3, botocore, jmespath, awslogs
  Running setup.py install for termcolor
  Found existing installation: jmespath 0.9.0
    Uninstalling jmespath-0.9.0:
      Successfully uninstalled jmespath-0.9.0
  Running setup.py install for awslogs
    Installing awslogs script to /Users/abrown/.virtualenvs/consilio/bin
Successfully installed awslogs-0.1.0 boto3-1.2.1 botocore-1.2.3 docutils-0.12 jmespath-0.7.1 python-dateutil-2.4.2 six-1.10.0 termcolor-1.1.0

ᐅ awslogs
Traceback (most recent call last):
  File "/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages/pkg_resources/__init__.py", line 612, in _build_master
    ws.require(__requires__)
  File "/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages/pkg_resources/__init__.py", line 918, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages/pkg_resources/__init__.py", line 810, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (botocore 1.2.3 (/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages), Requirement.parse('botocore<1.4.0,>=1.3.0'), {'boto3'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/abrown/.virtualenvs/python34/bin/awslogs", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages/pkg_resources/__init__.py", line 3018, in <module>
    working_set = WorkingSet._build_master()
  File "/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages/pkg_resources/__init__.py", line 614, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages/pkg_resources/__init__.py", line 627, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages/pkg_resources/__init__.py", line 810, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (botocore 1.2.3 (/Users/abrown/.virtualenvs/python34/lib/python3.4/site-packages), Requirement.parse('botocore<1.4.0,>=1.3.0'), {'boto3'})

UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 319:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gevent/greenlet.py", line 327, in run
    result = self._run(*self.args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/awslogs/core.py", line 213, in _raw_events_queue_consumer
    self.events_queue.put("{0}\n".format(' '.join(output)))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 319: ordinal not in range(128)
<Greenlet at 0x7f3344135050: <bound method AWSLogs._raw_events_queue_consumer of <awslogs.core.AWSLogs object at 0x7f33441a1790>>> failed with UnicodeEncodeError

You've found a bug! Please, raise an issue attaching the following traceback

https://github.com/jorgebastida/awslogs/issues/new

Version: 0.0.2
Python: 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2]
boto version: 2.38.0
Platform: Linux-3.13.0-65-generic-x86_64-with-Ubuntu-14.04-trusty
Config: {'output_group_enabled': True, 'end': None, 'aws_secret_access_key': 'SENSITIVE', 'log_stream_name': u'ALL', 'aws_region': u'eu-west-1', 'watch': False, 'start': u'1 days', 'log_group_name': u'bespin-prod', 'func': 'list_logs', 'aws_access_key_id': 'SENSITIVE', 'color_enabled': True, 'output_stream_enabled': True}
Args: ['/usr/local/bin/awslogs', 'get', 'bespin-prod', '--start=1 days']

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/awslogs/bin.py", line 121, in main
    getattr(logs, options.func)()
  File "/usr/local/lib/python2.7/dist-packages/awslogs/core.py", line 236, in list_logs
    pool.join()
  File "/usr/local/lib/python2.7/dist-packages/gevent/pool.py", line 100, in join
    self._empty_event.wait(timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/gevent/event.py", line 77, in wait
    result = self.hub.switch()
  File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 338, in switch
    return greenlet.switch(self)

LoopExit: This operation would block forever

Does not support large amounts of streams?

Hi,

Awslogs seems like a very promising tool. It's the first I've found that handles throttling.

I tried awslogs LOGGROUP ALL --watch with a log group that contains thousands of streams. In this case awslogs does not seem to return any results. It seems that awslogs tries to fetch all streams, which takes a very long time. AWS Lambda is a service that floods log groups with streams.

The CloudWatch Logs API supports an OrderBy parameter [1]. This would allow fetching only recently updated streams. Unfortunately, Boto does not allow the use of this parameter.

Would you have any suggestions on how to fine-tune log stream fetching for this use case?

-mfonsen
[1] http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeLogStreams.html
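
For reference, boto3 (unlike the old boto 2.x layer) does expose that parameter, so the most recently active streams can be fetched first instead of paging through all of them; a sketch:

import boto3

client = boto3.client("logs")
response = client.describe_log_streams(
    logGroupName="LOGGROUP",
    orderBy="LastEventTime",   # newest activity first ...
    descending=True,
    limit=50,                  # ... and only a manageable batch of streams
)
for stream in response["logStreams"]:
    print(stream["logStreamName"])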

Abort in case of unknown option

awslogs get loggroup --non-existing-option should abort with an error saying that --non-existing-option is not known.

This would prevent misspelled options from being silently ignored.
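
For what it's worth, plain argparse already behaves this way as long as parse_known_args() is not used to swallow the leftovers; a sketch of the desired behaviour:

import argparse

parser = argparse.ArgumentParser(prog="awslogs get")
parser.add_argument("log_group_name")
parser.add_argument("--start")

# Exits with status 2: "error: unrecognized arguments: --non-existing-option"
parser.parse_args(["loggroup", "--non-existing-option"])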

Why we have `pbr` in `tests_require` in `setup.py`

@jorgebastida

on line 19 of setup.py https://github.com/jorgebastida/awslogs/blob/master/setup.py#L19 there is:

if sys.version_info < (3, 3):
    tests_require.append('pbr>=0.11,<1.7.0')
    tests_require.append('mock>=1.0.0')

I wonder

  • why we have it there; I did not find any use of it, even in the tests
  • do we have any experience with pbr version 1.8.0 not being usable?

Originally I thought this piece of code would become obsolete by rewriting setup.py to use pbr (see issue #38), but while working on it I hit strange problems. One finding is that pbr has a version 1.8.0, yet even PyPI reports the latest version as 0.11.0. On Launchpad I found notes about problems with version 1.8.0 due to conflicts with Sphinx, and plans for it to be blacklisted. See https://bugs.launchpad.net/pbr/+bug/1496882

Currently I have setup.py rewritten to use pbr and tox testing in place, but when run, it refuses to install the awslogs dependencies. Any insight into this pbr-related problem is welcome.

Using --start frequently causes crashes

Using any input for --start causes weird crashes like this:

================================================================================
You've found a bug! Please, raise an issue attaching the following traceback
https://github.com/jorgebastida/awslogs/issues/new
--------------------------------------------------------------------------------
Version: 0.0.2
Python: 2.7.3 (default, Dec 18 2014, 19:10:20) 
[GCC 4.6.3]
boto version: 2.38.0
Platform: Linux-3.2.0-23-generic-x86_64-with-Ubuntu-12.04-precise
Config: {'output_group_enabled': True, 'end': None, 'aws_secret_access_key': 'SENSITIVE', 'log_stream_name': u'sidekiq-ip-10-158-29-134-i-bb7a6f4a', 'aws_region': u'us-east-1', 'watch': False, 'start': u'1m', 'log_group_name': u'services', 'func': 'list_logs', 'aws_access_key_id': 'SENSITIVE', 'color_enabled': True, 'output_stream_enabled': True}
Args: ['/usr/local/bin/awslogs', 'get', 'services', 'sidekiq-ip-10-158-29-134-i-bb7a6f4a', '--start=1m']

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/awslogs/bin.py", line 121, in main
    getattr(logs, options.func)()
  File "/usr/local/lib/python2.7/dist-packages/awslogs/core.py", line 236, in list_logs
    pool.join()
  File "/usr/local/lib/python2.7/dist-packages/gevent/pool.py", line 100, in join
    self._empty_event.wait(timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/gevent/event.py", line 77, in wait
    result = self.hub.switch()
  File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 338, in switch
    return greenlet.switch(self)
LoopExit: This operation would block forever
================================================================================

in this particular instance I did

awslogs get services someserverthing --start='1m'

Any advice? :)
