litl / backoff
Python library providing function decorators for configurable backoff and retry
License: MIT License
It would be nice if the exception instance currently being handled in on_exception could be passed to the on_giveup handler.
Also, I've noticed that there is only a bare raise statement after giving up on retrying. Wouldn't it be more correct to re-raise the current exception instance itself?
When StopIteration is raised, the exception being handled when calling the decorated function is re-raised.
Snippet with the code I'm talking about:
Lines 100 to 108 in 229d30a
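A note on the bare raise: inside an except block it already re-raises the active exception instance, traceback included. A self-contained sketch (toy retry loop, hypothetical names, no backoff import):

```python
def flaky():
    raise ValueError("boom")

def retry_then_reraise(func, tries=3):
    for attempt in range(tries):
        try:
            return func()
        except ValueError:
            if attempt == tries - 1:
                # A bare `raise` re-raises the exception currently being
                # handled, instance and traceback intact, so the caller
                # receives the original ValueError instance.
                raise

try:
    retry_then_reraise(flaky)
except ValueError as e:
    caught = e

assert str(caught) == "boom"
```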
Hello! My organization is using backoff version 1.8.0 and would benefit from having log messages pass variable data in as arguments (e.g. logger.error("Foo: %s", "bar")) rather than logging fully formatted strings (e.g. logger.error("Foo: {}".format("bar"))). This style of logging is recommended by the Python docs because it allows log sinks to process log messages in a more meaningful way. Specifically, my organization uses Sentry, which will understand that multiple calls with the same log message but different arguments represent different incidents of the same issue, but cannot recognize logs as related to the same issue if arguments are pre-formatted before the string is logged. I am happy to make a pull request for this if you would like.
Look at this simple example and notice that the waiting times do not increase as expected:
import backoff

def backoff_hdlr(details):
    print("Backing off {wait:0.1f} seconds afters {tries} tries "
          "calling function {target} with args {args} and kwargs "
          "{kwargs}".format(**details))

@backoff.on_exception(backoff.expo, ValueError, on_backoff=backoff_hdlr)
@backoff.on_exception(backoff.expo, TypeError, on_backoff=backoff_hdlr)
def get_url(url):
    raise ValueError

get_url("")
Backing off 0.8 seconds afters 1 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 1.3 seconds afters 2 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 3.2 seconds afters 3 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 0.5 seconds afters 4 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 2.6 seconds afters 5 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 3.7 seconds afters 6 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 2.3 seconds afters 7 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 90.7 seconds afters 8 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 175.3 seconds afters 9 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 146.8 seconds afters 10 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
Backing off 479.5 seconds afters 11 tries calling function <function get_url at 0x10a7e2048> with args ('',) and kwargs {}
(see the 0.5 seconds in the 4th log)
If we just leave one exponential backoff decorator, it now behaves as expected:
import backoff

def backoff_hdlr(details):
    print("Backing off {wait:0.1f} seconds afters {tries} tries "
          "calling function {target} with args {args} and kwargs "
          "{kwargs}".format(**details))

@backoff.on_exception(backoff.expo, ValueError, on_backoff=backoff_hdlr)
def get_url(url):
    raise ValueError

get_url("")
Backing off 0.2 seconds afters 1 tries calling function <function get_url at 0x10a32d620> with args ('',) and kwargs {}
Backing off 0.2 seconds afters 2 tries calling function <function get_url at 0x10a32d620> with args ('',) and kwargs {}
Backing off 2.1 seconds afters 3 tries calling function <function get_url at 0x10a32d620> with args ('',) and kwargs {}
Backing off 4.8 seconds afters 4 tries calling function <function get_url at 0x10a32d620> with args ('',) and kwargs {}
Backing off 10.5 seconds afters 5 tries calling function <function get_url at 0x10a32d620> with args ('',) and kwargs {}
Tested with backoff==1.8.1, and saw no relevant changes in the changelog since then.
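For context, a likely explanation (assuming backoff 1.8 defaults): on_exception applies full jitter by default, drawing each wait uniformly from the interval between zero and the exponential value, which is why the waits are not monotonic even with a single decorator; passing jitter=None to the decorator restores strictly increasing waits. A pure-Python sketch of the shape of the default wait computation (expo and full_jitter are reimplemented here for illustration, not imported from backoff):

```python
import itertools
import random

def expo(base=2, factor=1):
    # mirrors the shape of backoff.expo: factor * base**n for n = 0, 1, 2, ...
    n = 0
    while True:
        yield factor * base ** n
        n += 1

def full_jitter(value):
    # full jitter: a random wait drawn uniformly from [0, value]
    return random.uniform(0, value)

deterministic = list(itertools.islice(expo(), 5))   # 1, 2, 4, 8, 16
jittered = [full_jitter(v) for v in deterministic]  # each somewhere in [0, v]
```

With jitter=None on the decorator, the deterministic sequence is used directly, so waits grow monotonically.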
Hello,
I have a function that ingests records to some stream.
The stream may succeed although accepted only a strict subset of the records list.
I would like to modify the records list before it is processed again by the backoff mechanism (when using the on_predicate decorator).
In particular, I would like to filter the records that were not accepted and have them processed again.
How would I do that?
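One possible workaround, sketched with hypothetical names (the real ingestion client is stubbed out): keep the records in a mutable container that the retried function prunes on each attempt, and retry on the predicate "not everything accepted yet". The explicit while loop below stands in for what @backoff.on_predicate would do:

```python
def ingest(batch):
    # stand-in for the real stream client: accepts only the first two records
    return batch[:2]

pending = ["r1", "r2", "r3", "r4", "r5"]

def push_pending():
    # Decorated as e.g. @backoff.on_predicate(backoff.expo), this would be
    # retried until it returns a truthy value; each attempt prunes the
    # records that were accepted, so only the remainder is re-sent.
    accepted = ingest(list(pending))
    for rec in accepted:
        pending.remove(rec)
    return not pending

# stand-in for the decorator's retry loop
while not push_pending():
    pass

assert pending == []
```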
I am calling some functions of a library that I want to back off on. The problem is, I cannot modify the library.
import backoff
from requests.exceptions import TimeoutError
from somewhere import library
So I am looking for something like:
with backoff.on_exception(backoff.expo, TimeoutError, max_tries=8):
library.some_function(yada, blah, foo, bar)
# end if
or
while backoff.on_exception(backoff.expo, TimeoutError, max_tries=8):
library.some_function(yada, blah, foo, bar)
# end if
Not sure what is possible to do.
Edit (2017-04-27): Calling just one function can be solved as shown below.
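Since backoff.on_exception(...) returns an ordinary decorator (a callable taking a function), it can be applied inline at the call site without modifying the library. A minimal pure-Python model of the same mechanics (on_exception_like is a toy stand-in, not the real API):

```python
# The real call would look like:
#   backoff.on_exception(backoff.expo, TimeoutError, max_tries=8)(
#       library.some_function)(yada, blah, foo, bar)
# Toy model of the same mechanics:
def on_exception_like(exc, max_tries):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return func(*args, **kwargs)
                except exc:
                    if attempt == max_tries - 1:
                        raise
        return wrapper
    return decorator

calls = []

def some_function(x):
    # fails twice, then succeeds
    calls.append(x)
    if len(calls) < 3:
        raise TimeoutError
    return "ok"

result = on_exception_like(TimeoutError, max_tries=8)(some_function)("yada")
assert result == "ok"
```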
Suppose I have a function that sends an HTTP request with a token that is refreshed every N minutes and is not under my control. So I wrap the token-fetching request into a separate closure that can be called whenever needed.
def token_fetcher():
    token = None
    def _fetcher(renew):
        nonlocal token
        if token is None or renew:
            token = do_something_to_fetch_the_token()
        return token
    return _fetcher
and my actual function doing HTTP request
def do_http_request(query, token_fetcher, renew=False):
    return requests.get('http://example.com/request',
                        params=query, headers=token_fetcher(renew))
So is it possible to make backoff flip renew to True if a backoff happens (second attempt onwards)?
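One possible approach, sketched with toy stand-ins (no backoff import): keep the renew flag in shared state that an on_backoff handler flips, so the first attempt reuses the cached token and retries fetch a fresh one. backoff does support an on_backoff handler; the state-sharing shown here is a workaround of my own, not a library feature:

```python
state = {"renew": False, "fetches": 0}

def do_something_to_fetch_the_token():
    state["fetches"] += 1
    return "token-%d" % state["fetches"]

def token_fetcher():
    token = None
    def _fetcher():
        nonlocal token
        if token is None or state["renew"]:
            token = do_something_to_fetch_the_token()
            state["renew"] = False
        return token
    return _fetcher

def force_renew(details):
    # suitable as on_backoff=force_renew on the decorator: it runs before
    # every retry, so attempts from the second one onwards renew the token
    state["renew"] = True

fetcher = token_fetcher()
first = fetcher()    # first attempt: cached fetch
force_renew({})      # what backoff would do before the retry
second = fetcher()   # retry: fresh token
```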
I wanted to see if there is interest in an API addition to allow users to determine wait time based on exception or return value from the decorated function.
One use case: you're sending requests to a rate-limited API. The API blocks your request but is nice enough to include a Retry-After header in the response. Retry-After contains the exact number of seconds you should wait before making another request.
If I could access the exception or response somehow before the wait time is decided, I could check for this header and, if present, use its value; otherwise fall back on a wait_gen.
Using any kind of calculated wait function is very hit and miss in this scenario and mostly wasteful.
I'm thinking about adding an optional wait_override argument to the decorators. This would be a function that gets passed the exception (for backoff.on_exception) or the retryable's return value (for backoff.on_predicate), so that users can do something like this:
def my_override(e):
    headers = getattr(e, 'headers', {})
    seconds = headers.get('Retry-After')
    # If None is returned, wait_gen takes over
    return int(seconds) if seconds else None

@backoff.on_exception(backoff.expo, HTTPError, wait_override=my_override)
def request_thing(url):
    response = do_request(...)
    response.raise_for_status()
    return response
See related discussion in #16
Full jitter is great, but it can mean that whatever you're polling gets significantly less time than you might expect to be ready.
It would also be great to be able to say something along the lines of "try for 5 minutes, backing off exponentially".
I have a use case for a 'giveup' function to work conditionally, based on the arguments passed to the function to be retried.
I'm trying to use backoff with a login function that connects to an external server. The code looks something like this:
@backoff.on_exception(backoff.expo, FailedConnection)
def login(username, password):
    # stuff
When on_backoff triggers, the default handler is invoked, which logs the call signature — in this case including the password.
It'd be great to have a way to disable/override the default handlers to provide my own logging behavior.
I own the Arch Linux AUR python-backoff package and was hoping you might be able to help me figure out why it doesn't build for Python 2 (it's fine for Python 3).
==> Starting package_python2-backoff()...
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'python_requires'
warnings.warn(msg)
running install
running build
running build_py
file README.py (for module README) not found
file LICENSE.py (for module LICENSE) not found
creating build
creating build/lib
creating build/lib/backoff
copying backoff/_decorator.py -> build/lib/backoff
copying backoff/_wait_gen.py -> build/lib/backoff
copying backoff/_async.py -> build/lib/backoff
copying backoff/_common.py -> build/lib/backoff
copying backoff/__init__.py -> build/lib/backoff
copying backoff/_jitter.py -> build/lib/backoff
copying backoff/_sync.py -> build/lib/backoff
file README.py (for module README) not found
file LICENSE.py (for module LICENSE) not found
running install_lib
creating /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr
creating /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib
creating /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7
creating /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages
creating /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
copying build/lib/backoff/_sync.py -> /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
copying build/lib/backoff/__init__.py -> /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
copying build/lib/backoff/_common.py -> /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
copying build/lib/backoff/_jitter.py -> /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
copying build/lib/backoff/_async.py -> /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
copying build/lib/backoff/_decorator.py -> /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
copying build/lib/backoff/_wait_gen.py -> /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff
byte-compiling /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff/_sync.py to _sync.pyc
byte-compiling /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff/__init__.py to __init__.pyc
byte-compiling /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff/_common.py to _common.pyc
byte-compiling /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff/_jitter.py to _jitter.pyc
byte-compiling /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff/_async.py to _async.pyc
File "/usr/lib/python2.7/site-packages/backoff/_async.py", line 22
async def _call_handlers(hdlrs, target, args, kwargs, tries, elapsed, **extra):
^
SyntaxError: invalid syntax
byte-compiling /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff/_decorator.py to _decorator.pyc
byte-compiling /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff/_wait_gen.py to _wait_gen.pyc
writing byte-compilation script '/tmp/tmpuUGdlI.py'
/usr/bin/python2 -O /tmp/tmpuUGdlI.py
File "/usr/lib/python2.7/site-packages/backoff/_async.py", line 22
async def _call_handlers(hdlrs, target, args, kwargs, tries, elapsed, **extra):
^
SyntaxError: invalid syntax
removing /tmp/tmpuUGdlI.py
running install_egg_info
Writing /home/fryfrog/aur/python-backoff/pkg/python2-backoff/usr/lib/python2.7/site-packages/backoff-1.8.0-py2.7.egg-info
While backoff seems to be working on Python 3.5-3.7, it fails with Python 2.7 with a weird exception, even for a simple use case:
@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_time=60)
def test_delete_project(cl_admin, cl_normal, slug):
    with pytest.raises(JIRAError) as ex:
        assert cl_normal.delete_project(slug)
    assert 'Not enough permissions to delete project' in str(ex.value) \
        or str(ex.value).endswith('is not a Project, projectID or slug')
    assert cl_admin.delete_project(slug)
Failure:
__________________________________________________________________________________________________________________________ test_delete_project __________________________________________________________________________________________________________________________
args = (), kwargs = {}, max_tries_ = None, max_time_ = 60, tries = 1, start = datetime.datetime(2019, 5, 30, 8, 38, 0, 783162), wait = <generator object expo at 0x10cd4a5f0>, elapsed = 8e-06
details = (<function test_delete_project at 0x10cc6d1b8>, (), {}, 1, 8e-06)
@functools.wraps(target)
def retry(*args, **kwargs):
    # change names because python 2.x doesn't have nonlocal
    max_tries_ = _maybe_call(max_tries)
    max_time_ = _maybe_call(max_time)
    tries = 0
    start = datetime.datetime.now()
    wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
    while True:
        tries += 1
        elapsed = timedelta.total_seconds(datetime.datetime.now() - start)
        details = (target, args, kwargs, tries, elapsed)
        try:
>           ret = target(*args, **kwargs)
E           TypeError: test_delete_project() takes exactly 3 arguments (0 given)
.tox/py27/lib/python2.7/site-packages/backoff/_sync.py:94: TypeError
Update: I ended up using tenacity.retry, which apparently works with all Python versions.
It would be useful to get max_tries dynamically from the decorated function at call time.
It would be useful to have an on_attempt callback which is called before every invocation of the decorated function; that seems like it would complete the callback suite :)
My use-case is updating some state (e.g. "connecting" / "waiting" / "connected") that is not practical to modify from inside my decorated function. If I wasn't using backoff, I would update the state immediately before and after calling my connect function.
I can make a PR if there's support for this idea!
Hi,
I might be missing something here; I tried looking into the docs and all the issues, but didn't find anything.
Here is what I want: I want to return a default value once all the retrying is done, like:
def backoff_hdlr(details):
    return None  # tried getting value from URL, now just return None

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      on_backoff=backoff_hdlr)
def get_url(url):
    return requests.get(url)
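The return value of an on_backoff handler is ignored, so the handler can't supply a default. One workaround is to let the decorated function raise after retries are exhausted and catch that at the call site; a self-contained sketch with the decorated function stubbed out:

```python
class RequestException(Exception):
    pass

def get_url(url):
    # stand-in for the backoff-decorated function, which re-raises the
    # last exception once max_tries/max_time is exhausted
    raise RequestException(url)

def get_url_or_none(url):
    # the return value of an on_backoff handler is ignored, so catch the
    # final exception at the call site and supply the default there
    try:
        return get_url(url)
    except RequestException:
        return None

value = get_url_or_none("http://example.com")
assert value is None
```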
I'm not sure if I'm doing something wrong or your aiohttp example is wrong! It seems the backoff decorator doesn't catch aiohttp exceptions automatically and you need to raise the error manually:
@backoff.on_exception(backoff.expo,
                      aiohttp.ClientError,
                      max_tries=4)
async def get_url(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            response.raise_for_status()
            return await response.text()
If I remove response.raise_for_status() from the code above, no retries will happen on aiohttp.ClientError.
In some cases, to know whether a failure is retryable or a permanent error, we need to inspect the exception instance and match it against retryable error codes.
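backoff.on_exception accepts a giveup callable for exactly this: retrying stops as soon as it returns True for the raised exception. A sketch with a hypothetical APIError carrying a status code (only the predicate is exercised here; the decorator line would be @backoff.on_exception(backoff.expo, APIError, giveup=fatal)):

```python
RETRYABLE = {429, 500, 502, 503, 504}

class APIError(Exception):
    # hypothetical exception carrying an HTTP-style status code
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def fatal(e):
    # giveup=fatal: backoff stops retrying as soon as this returns True
    return e.code not in RETRYABLE

assert fatal(APIError(404)) is True    # permanent error: give up
assert fatal(APIError(503)) is False   # retryable: keep backing off
```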
Hi,
With the switch to poetry it is no longer possible to install backoff using pip from the source distribution.
$ pip install backoff --no-binary :all:
I reported the issue to poetry (python-poetry/poetry#760), but I thought it would make also sense to report it here. Read issue on poetry for longer version of the problem.
Would it be possible to switch back to setup.py? For now I'm pinning to an older version of backoff. In any case, thank you for your work and time.
Hello all,
I was wondering if it's possible to pass a class method to the on_backoff decorator parameter, like this:
@backoff.on_exception(
    backoff.expo,
    ConnectionClosedError,
    on_backoff=self.my_method,
)
def func_test(self):
    pass
Of course this doesn't work, because self is not defined at class-body evaluation time, which results in this error:
on_backoff=self.backoff_redis_error_log,
NameError: name 'self' is not defined
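One common workaround is to apply the decorator in __init__, where self already exists, rather than in the class body. The sketch below uses a toy decorator in place of backoff.on_exception to stay self-contained; the pattern is the same:

```python
class Client:
    def __init__(self):
        self.backoffs = 0
        # Decorate here, where `self` exists; with the real library this
        # line would be:
        #   self.func_test = backoff.on_exception(
        #       backoff.expo, ConnectionError,
        #       on_backoff=self.my_method)(self.func_test)
        self.func_test = self._with_retry_logging(self.func_test)

    def my_method(self, details):
        # bound handler, free to use instance state
        self.backoffs += 1

    def _with_retry_logging(self, func):
        # toy stand-in for backoff.on_exception
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except ConnectionError:
                self.my_method({})
                raise
        return wrapper

    def func_test(self):
        raise ConnectionError

c = Client()
try:
    c.func_test()
except ConnectionError:
    pass
assert c.backoffs == 1
```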
Because of AWS CloudWatch (see https://pypi.python.org/pypi/watchtower), I want to use backoff to configure the logging package. If CloudWatch is inaccessible, an exception is thrown when attempting to configure a logger that uses CloudWatch. Using backoff worked great until I realized that I wasn't seeing any more 'backoff' log entries in CloudWatch. Unfortunately, using backoff in this way initializes the 'backoff' logger before logging is initialized. It also pollutes this logger for the duration of the process as there is no known way to reconfigure a logger. Thus, I would like to tell backoff to use a different logger either by object or by name. It would be best for this to be configured via decorators.
Hi,
the setup.py states that backoff should work with older Pythons, but the NullHandler it uses was only introduced in Python 2.7 and 3.1: https://docs.python.org/3.2/library/logging.handlers.html?highlight=nullhandler#logging.NullHandler
The error message when trying to install under Python 2.6 is:
python2.6 setup.py
Traceback (most recent call last):
File "setup.py", line 3, in <module>
import backoff
File "/home/bonko/git/backoff/backoff.py", line 104, in <module>
logger.addHandler(logging.NullHandler())
AttributeError: 'module' object has no attribute 'NullHandler'
I currently have a function that looks like the following:
@backoff.on_exception(
    backoff.expo,
    RuntimeError,
    max_time=30
)
def get_rules(url):
    response = requests.get(url)
    if not response.ok:
        msg = 'Unable to get latest rules: HTTP {} {}'.format(
            response.status_code,
            response.reason
        )
        raise RuntimeError(msg)
    return response.text
I'm having trouble unit testing this, since it now retries for 30 seconds.
What should I mock/monkeypatch so that it won't retry for 30 seconds?
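One approach (an assumption about backoff's internals that holds for the sync path): the decorator ultimately waits via time.sleep, so patching that out in tests keeps the retries but removes the wall-clock delay. A self-contained sketch with the retrying function stubbed out:

```python
import time
from unittest import mock

def get_rules_with_retries():
    # stand-in for the decorated function: backoff's sync path waits by
    # calling time.sleep between tries
    for wait in (1, 2, 4):
        time.sleep(wait)
    return "done"

with mock.patch("time.sleep") as fake_sleep:
    start = time.monotonic()
    result = get_rules_with_retries()
    elapsed = time.monotonic() - start

assert result == "done"
assert fake_sleep.call_count == 3
assert elapsed < 1.0   # retries happened, but without real waiting
```

Another option is to override max_time/max_tries in tests, which works out of the box if they are passed as callables that read test configuration.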
>>> backoff.on_exception(print, backoff.expo, Exception, max_tries=8)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: on_exception() got multiple values for argument 'max_tries'
File "/home/paul/.virtualenvs/eacollector/lib/python3.2/site-packages/backoff.py", line 254, in retry
invoc = _invoc_repr(target, args, kwargs)
File "/home/paul/.virtualenvs/eacollector/lib/python3.2/site-packages/backoff.py", line 157, in _invoc_repr
args_out = ", ".join(unicode(a) for a in args)
File "/home/paul/.virtualenvs/eacollector/lib/python3.2/site-packages/backoff.py", line 157, in <genexpr>
args_out = ", ".join(unicode(a) for a in args)
NameError: global name 'unicode' is not defined
ERROR teleflask.server.mixins._execute_command:
Failed calling command '/test' (<function test at 0x10f7836a8>):
Traceback (most recent call last):
File "/path/to/teleflask/teleflask/server/mixins.py", line 499, in _execute_command
self.process_result(update, func(update, text))
File "/path/to/teleflask/teleflask/server/base.py", line 526, in process_result
from ..messages import Message
File "/path/to/teleflask/teleflask/messages.py", line 230, in <module>
class DocumentMessage(Message):
File "/path/to/teleflask/teleflask/messages.py", line 315, in DocumentMessage
def send(self, sender: PytgbotApiBot, receiver, reply_id)->PytgbotApiMessage:
File "/path/to/teleflask/virtualenv3.6.venv/lib/python3.6/site-packages/backoff/_decorator.py", line 141, in decorate
if asyncio.Task.current_task() is not None:
File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/events.py", line 671, in get_event_loop
return get_event_loop_policy().get_event_loop()
File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/events.py", line 583, in get_event_loop
% threading.current_thread().name)
RuntimeError: There is no current event loop in thread 'Thread-1'.
So what fails is asyncio.Task.current_task() when calling teleflask/messages.py:315.
The actual failure in backoff is in _decorator.py (function def decorate(target)), after asyncio.iscoroutinefunction(target) returned False.
For reference, this is luckydonald/teleflask@777953, running examples/example2.py. (Flask internal debug server)
Mac OS 10.9.5
Python 3.6.0
(via brew)
Python 3.6.0 (default, Jan 1 2017, 18:45:22)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
backoff: maybe 1.4.0?
python -c 'from backoff import __version__ as v; print(v)'
ImportError: cannot import name '__version__'
backoff 1.9.1 appears to have added a top-level tests package that is installed along with the backoff package. However, installing the tests package as part of backoff's main Python code is causing problems for some of my organization's own test suites, which also reside in a package named tests.
This is probably something we can work around by changing our sys.path, but it's surprising to me that installing a package would also install its own top-level tests module.
Hello, I am writing a function which yields results. It seems backoff.on_exception does not work with the yield keyword.
@backoff.on_exception(
    backoff.expo, (ConnectionError, ConnectTimeout), max_tries=MAX_RETRIES)
def fib():
    for i in range(1, 10):
        raise ConnectionError
        yield i
It seems that once the generator has been created, execution never goes through the retry logic again. Is there any way to fix this?
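The reason is that calling a generator function only creates the generator; the body (and any exception) runs later, during iteration, outside the decorated call. One workaround is to decorate a function that fully consumes the generator, so a mid-stream failure restarts the whole sequence. A sketch where the explicit loop stands in for the backoff decorator:

```python
attempts = {"n": 0}

def fib():
    # generator body only runs during iteration, not when fib() is called,
    # so a decorator wrapping the call never sees the ConnectionError
    attempts["n"] += 1
    for i in range(1, 4):
        if attempts["n"] < 3:
            raise ConnectionError
        yield i

def fib_list():
    # decorate this instead, e.g.
    #   @backoff.on_exception(backoff.expo, ConnectionError, max_tries=MAX_RETRIES)
    # so a mid-stream failure restarts the whole sequence
    return list(fib())

result = None
for _ in range(5):            # stand-in for the decorator's retry loop
    try:
        result = fib_list()
        break
    except ConnectionError:
        continue

assert result == [1, 2, 3]
```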
When it comes time to shut down the app, I would like to get a list of all of the asyncio.Task's that are sleeping due to backoff and cancel them (causing the real exceptions - not asyncio.CancelledException - to be raised up the stacks). This is needed when the sleep is for, say, five minutes. And I want this behavior to be conditional based on 'kwargs' passed to the @ backoff 'd function. Perhaps provide access to a global list of (details, Task) tuples that can be iterated through when, say, SIGTERM is intercepted.
It would be great to have a minimum time to wait, to complement the max_time.
We are using backoff in a fairly bog-standard web app, and our existing app-wide loggers are all configured to use a LoggerAdapter to emit a "tracer ID", which is simply a UUID generated by nginx and then propagated into the app.
Whenever we emit a log entry, the tracer ID is included, and thus one can trace an HTTP request throughout its entire lifecycle.
We would love to be able to configure backoff's logger similarly, so we can see if a given HTTP request resulted in any backoffs.
Here is the relevant snippet of how we configure our own LoggerAdapter; the key part is that we provide a wrapper around logging.getLogger, which then returns our adapter instead:
class TraceIDLoggingAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        tracer = self.extra['trace_id']
        if tracer:
            tracer += " "
        return '%s%s' % (tracer, msg), kwargs

class GlobalTraceID(object):
    def __getitem__(self, key):
        return get_global_trace_id()

    def __iter__(self):
        return get_global_trace_id()

def get_logger(name):
    logger = logging.getLogger(name)
    adapter = TraceIDLoggingAdapter(logger, GlobalTraceID())
    return adapter
Any thoughts here?
There are times when values used in the backoff decorators are only known at runtime and not at import time. It would be nice to allow, for example, the on_exception decorator to accept a callable for the max_tries and interval kwargs that returns the value to use when evaluated.
I sometimes have a third-party function or method that I would like to retry using backoff. Currently, I have to wrap the call in a single line function and decorate that. It would be nice if I could, instead, use a context manager.
So what I currently do is:
def do_a_thing(...):
    @backoff.on_exception(...)
    def _actually_do_third_party_thing():
        _do_third_party_thing()

    some_setup()
    _actually_do_third_party_thing()
    some_teardown()
And what I'd like to be able to do:
def do_a_thing(...):
    some_setup()
    with backoff.on_exception(...):
        _do_third_party_thing()
    some_teardown()
Hi! I'm currently using backoff in a project where I need to wait for a service to come online after it has just started. Everything is working great but the logs might look a little bit suspicious to a user since it's getting filled with errors while backing off.
Would you mind adding an option to set the log level? For example, setting it to DEBUG even though it's an exception.
It would be great if backoff would be available for use with asyncio's coroutines.
This requires:
- async versions of the on_predicate and on_exception decorators;
- support for on_success / on_backoff / on_giveup handlers that are coroutines;
- using asyncio.sleep() instead of time.sleep().
Obviously the sync and async versions can't be trivially combined.
This can be solved in one of the following ways:
Check in on_predicate / on_exception whether the wrapped function is a coroutine, and switch between sync and async implementations. Note that in general time.sleep can't be used with asyncio, only in a separate thread, due to the nature of async code. This means that having both implementations, sync and async, in a single program will be very rare.
Also, I don't see an easy way of sharing code between the sync and async versions; at the least, the tests will be completely duplicated.
Reimplement backoff using async primitives in a separate library. Unfortunately this leads to code duplication.
As a starting point I forked backoff and reimplemented it with async primitives: https://github.com/rutsky/aiobackoff
It passes all the tests and now I'm trying to integrate it with my project.
Please share ideas and intentions around implementing asyncio support in the backoff library; I would like to share efforts as much as possible. If there is no intent to add asyncio support to backoff, I can publish the aiobackoff fork.
Hi!
Could I ask you to add the tests to the release tarball? They can be used for testing the package after build.
Not quite sure if this is related to #8, but I am having an exception e with an e.error_number attribute.
How can I check that in the @on_exception decorator?
@backoff.on_exception(backoff.expo, SomeException, max_tries=7, jitter=None)
def foobar():
    if should_backoff:
        raise SomeException(error_number=4458)  # please do a backoff
    else:
        raise SomeException(error_number=42)  # raise
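This looks like a job for the giveup parameter of backoff.on_exception: a callable that receives the exception and returns True to stop retrying. Only the predicate is exercised in this sketch; on the decorator it would read giveup=is_permanent:

```python
class SomeException(Exception):
    def __init__(self, error_number):
        super().__init__(error_number)
        self.error_number = error_number

def is_permanent(e):
    # used as giveup=is_permanent: return True to stop retrying
    return e.error_number != 4458

assert is_permanent(SomeException(error_number=4458)) is False  # back off
assert is_permanent(SomeException(error_number=42)) is True     # re-raise
```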
Using backoff under Python 3.8 produces the following warning:
/home/discosultan/.local/lib/python3.8/site-packages/backoff/_async.py:15
/home/discosultan/.local/lib/python3.8/site-packages/backoff/_async.py:15: DeprecationWarning: "@coroutine" decorator is deprecated since Python 3.8, use "async def" instead
return asyncio.coroutine(coro_or_func)
Uvicorn runs the server in an event loop and backoff raises a TypeError:
Traceback (most recent call last):
File "/Users/sergey/projects/nuc-gateway/venv/bin/uvicorn", line 11, in <module>
load_entry_point('uvicorn==0.8.4', 'console_scripts', 'uvicorn')()
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/uvicorn/main.py", line 258, in main
run(**kwargs)
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/uvicorn/main.py", line 279, in run
server.run()
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/uvicorn/main.py", line 307, in run
loop.run_until_complete(self.serve(sockets=sockets))
File "uvloop/loop.pyx", line 1451, in uvloop.loop.Loop.run_until_complete
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/uvicorn/main.py", line 314, in serve
config.load()
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/uvicorn/config.py", line 186, in load
self.loaded_app = import_from_string(self.app)
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/uvicorn/importer.py", line 20, in import_from_string
module = importlib.import_module(module_str)
File "/Users/sergey/.pyenv/versions/3.7.3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "./app/api_app.py", line 14, in <module>
from app.api import api_v1
...
File "./app/itm/client.py", line 142, in <module>
class APIClient(requests_client.Client):
File "./app/itm/client.py", line 450, in APIClient
@retry401
File "/Users/sergey/projects/nuc-gateway/venv/lib/python3.7/site-packages/backoff/_decorator.py", line 181, in decorate
"backoff.on_exception applied to a regular function "
TypeError: backoff.on_exception applied to a regular function inside coroutine, this will lead to event loop hiccups. Use backoff.on_exception on coroutines in asynchronous code.
Is there a way to specify a minimum delay? For example, we want the first retry to be at 15 seconds and then increase the delay per the formula?
a = minimum + factor * base ** n
If not, any issue if we add one? I can submit a PR.
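No library change may be strictly required: the wait_gen argument accepts any generator function, and extra keyword arguments given to the decorator are forwarded to it. A sketch of a wait generator implementing the formula above (expo_with_min is my own name, not part of backoff); it would be used as @backoff.on_exception(expo_with_min, SomeError, minimum=15):

```python
import itertools

def expo_with_min(minimum=15, base=2, factor=1):
    # yields minimum + factor * base**n for n = 0, 1, 2, ...
    n = 0
    while True:
        yield minimum + factor * base ** n
        n += 1

waits = list(itertools.islice(expo_with_min(minimum=15), 4))
assert waits == [16, 17, 19, 23]
```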
I'm migrating from the retrying library to backoff, and one thing I noticed between the two libraries is that with backoff I always have to specify which exception I want to retry on.
Would it be viable to default to Exception and make specifying exceptions in on_exception optional?
It seems user handlers are appended to the default one. Do you think it would make more sense to allow users to override them, or even to disable the default handlers entirely?
def _handlers(hdlr, default=None):
    defaults = [default] if default is not None else []
    if hdlr is None:
        return defaults
    if hasattr(hdlr, '__iter__'):
        return defaults + list(hdlr)
    return defaults + [hdlr]
To
def _handlers(hdlr, default=None):
    if hdlr is None:
        return [default] if default is not None else []
    if hasattr(hdlr, '__iter__'):
        return list(hdlr)
    return [hdlr]
First off, crackin' little library. Really like the API.
However, in backoff.expo:
https://github.com/litl/backoff/blob/master/backoff/_wait_gen.py#L14
Having n=0 means the second attempt will always happen only 1 second after the first, regardless of base and factor.
My assumption was that base would set the minimum backoff time.
Is this intended behaviour? To me it seems counter-intuitive to the meaning of the 'base' parameter!
This could be addressed in one of two simple ways. Change
https://github.com/litl/backoff/blob/master/backoff/_wait_gen.py#L14
to n = 1, or change
https://github.com/litl/backoff/blob/master/backoff/_wait_gen.py#L16
to a = base * factor ** n.
Either of these changes would mean the minimum retry interval was equal to base, and not 1.
Hey, thank you for creating such a convenient library! I really like it and have no problems using it, but here's a small suggestion:
If a given handler does not make use of the details object, one has to write either:
def default_handler(_: dict) -> None:
    '''Standard exception handling'''
    log(sys.exc_info()[1])
or:
def default_handler(details: dict) -> None:
    '''Standard exception handling'''
    if details:
        pass
    log(sys.exc_info()[1])
in order to get a 10/10 result with pylint.
It would be better if you provided a way of not having to deal with this. Example:
```python
from inspect import signature

if len(signature(handler).parameters) < 1:
    def wrapper(_: dict) -> None:
        '''Wrapper for handlers that do not make use of details'''
        return handler()
    actual_handler = wrapper
else:
    actual_handler = handler
```
So that this is possible:
```python
def default_handler() -> None:
    '''Standard exception handling'''
    log(sys.exc_info()[1])
```
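Here is a self-contained, runnable version of that adapter idea. `adapt_handler` is a hypothetical name introduced for the sketch; backoff itself has no such helper today:

```python
import inspect

def adapt_handler(handler):
    # Wrap zero-argument handlers so the caller can always pass a
    # details dict; handlers that already accept one pass through.
    if len(inspect.signature(handler).parameters) < 1:
        def wrapper(_details):
            return handler()
        return wrapper
    return handler

def no_details_handler():
    return "handled"

def details_handler(details):
    return details["tries"]

wrapped = adapt_handler(no_details_handler)
passthrough = adapt_handler(details_handler)
```

Applying this at decoration time would let users register zero-argument handlers without pylint complaining about an unused parameter.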
Greetings!
I've been working on getting python-backoff to work in Fedora's Rawhide release, which recently upgraded to the newly released Python 3.7. There are three test failures there:
```
_________________________ test_on_exception_coro_cancelling __________________________

event_loop = <_UnixSelectorEventLoop running=False closed=False debug=False>

    @pytest.mark.asyncio
    def test_on_exception_coro_cancelling(event_loop):
        sleep_started_event = asyncio.Event()

        @backoff.on_predicate(backoff.expo)
        @asyncio.coroutine
        def coro():
            sleep_started_event.set()
            try:
                yield from asyncio.sleep(10)
            except asyncio.CancelledError:
                return True
            return False

        task = event_loop.create_task(coro())
>       yield from sleep_started_event.wait()
E       TypeError: cannot 'yield from' a coroutine object in a non-coroutine generator

tests/python34/test_backoff_async.py:568: TypeError
________________________ test_on_exception_on_regular_function ________________________

    @pytest.mark.asyncio
    def test_on_exception_on_regular_function():
        # Force this function to be a running coroutine.
>       yield from asyncio.sleep(0)
E       TypeError: cannot 'yield from' a coroutine object in a non-coroutine generator

tests/python34/test_backoff_async.py:578: TypeError
________________________ test_on_predicate_on_regular_function ________________________

    @pytest.mark.asyncio
    def test_on_predicate_on_regular_function():
        # Force this function to be a running coroutine.
>       yield from asyncio.sleep(0)
E       TypeError: cannot 'yield from' a coroutine object in a non-coroutine generator

tests/python34/test_backoff_async.py:590: TypeError
```
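For reference, the failing generator-based pattern can be expressed with native `async`/`await` syntax, which Python 3.7 accepts. This is a standalone sketch of the first test's cancellation logic, without the backoff decorator or pytest-asyncio, just to show the rewritten shape:

```python
import asyncio

async def coro(sleep_started_event):
    sleep_started_event.set()
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        return True  # report that cancellation was observed
    return False

async def run_test():
    event = asyncio.Event()
    task = asyncio.ensure_future(coro(event))
    await event.wait()   # wait until the coroutine has started sleeping
    task.cancel()
    return await task    # the coroutine swallows the cancellation

result = asyncio.run(run_test())
```

pytest-asyncio supports `async def` test functions directly, so the `@asyncio.coroutine` / `yield from` pattern in the test suite could be migrated this way.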
`backoff` does not restart `wait_gen` counters on a successful function call.
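For illustration, here is a minimal retry loop that does restart the counter: it creates a fresh wait generator on every top-level call, so a success resets the backoff. This is a sketch with a simplified expo-style generator, not backoff's internals:

```python
def expo(base=2):
    # simplified exponential wait generator
    n = 0
    while True:
        yield base ** n
        n += 1

waits_seen = []

def call_with_retry(func, wait_gen=expo, max_tries=3):
    waits = wait_gen()  # fresh generator per call: the counter restarts
    for attempt in range(1, max_tries + 1):
        try:
            return func()
        except ValueError:
            if attempt == max_tries:
                raise
            waits_seen.append(next(waits))  # real code would time.sleep here

state = {"calls": 0}

def flaky():
    state["calls"] += 1
    if state["calls"] % 2:  # odd calls fail, even calls succeed
        raise ValueError
    return "ok"

call_with_retry(flaky)  # one failure, one wait, then success
call_with_retry(flaky)  # backoff restarts at base ** 0 again
```

Both calls wait `base ** 0` before their retry, confirming the counter restarted after the first success.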
It would be amazing to expose a conditional backoff API that would let users building things like API clients offer their end users the option of whether to retry.
I have a project where I configure loggers in the root `__init__.py`:
```python
# foo-project/foo/__init__.py
import logging
logging.getLogger('backoff').setLevel(logging.FATAL)
```
```python
# foo-project/foo/bar.py
import backoff

@backoff...
def fn(...):
    pass
```
However, this means that my configuration is overwritten, because my `__init__.py` is necessarily imported before `bar.py`, which then imports `backoff` and hits
Lines 11 to 13 in 229d30a
There's a workaround in that I can change `__init__.py` to look like:
```python
import logging
import backoff  # not used here; side effect triggers logging initialization
logging.getLogger('backoff').setLevel(logging.FATAL)
```
which is a little inelegant.
Is there a way that backoff could be written to respect existing settings of the logger if they exist? This is perhaps also a broader problem with `logging` or idiomatic Python logging patterns; if that is the case, feel free to close this out (though if you happen to know of any good documents on logging best practices, I'd love to have them; I wasn't able to find anything better than my workaround above).
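One relevant pattern from the Python logging HOWTO: a library attaches only a `NullHandler` to its logger at import time and never sets a level, leaving all configuration to the application. A sketch using a hypothetical logger name (not what backoff currently does, per the lines referenced above):

```python
import logging

# Library side (at import time): attach only a NullHandler and do not
# call setLevel, so application settings are never clobbered.
lib_logger = logging.getLogger("backoff_demo")  # hypothetical name for the demo
lib_logger.addHandler(logging.NullHandler())

# Application side: configure whenever convenient; import order no
# longer matters because the library never touches level or handlers.
logging.getLogger("backoff_demo").setLevel(logging.FATAL)
```

With this pattern the `__init__.py` configuration in the original report would survive regardless of when `backoff` is first imported.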