florimondmanca / aiometer

A Python concurrency scheduling library, compatible with asyncio and trio.

Home Page: https://pypi.org/project/aiometer/

License: MIT License

Languages: Python 97.01%, Makefile 2.99%
Topics: async, asyncio, concurrency-management, flow-control, python, trio

aiometer's Introduction

NOTICE

Now unused. See florimondmanca/www.


Personal

This is the repository for the frontend application powering my personal blog.

For the backend API, see personal-api.

Install

Install Angular CLI:

$ npm install -g @angular/cli

Install the dependencies:

$ npm install

Quickstart

Create an environment file called .env (it will be excluded from version control) at the project root, containing the following variables:

  • API_KEY: a valid API key created via the backend admin site.
  • BACKEND_URL: the URL to the backend root (without trailing slash).

For example:

# .env
API_KEY=myapikey
BACKEND_URL=http://localhost:8000

Generate your development environment file:

$ npm run config -- --env=dev

Start the development server, which will run on http://localhost:4200/:

$ ng serve -c dev

Using server-side rendering

Server-side rendering is implemented using Angular Universal.

Server-side rendering sends fully rendered HTML pages to clients, instead of sending them a blank page and letting Angular render it in the browser. This reduces the "first meaningful paint" time, helps with search engine optimization, and allows integration with social media.

To use the server-rendered app, you must first create a build of the app:

$ npm run build:dev

Note: in production, use npm run build instead to create a production-optimized build.

Then start the server-rendered app (an Express server):

$ npm run serve:ssr

Scripts

See package.json for the available NPM scripts.

CI/CD

Travis CI is configured on this repo and generates a production build on every push to a branch.

aiometer's People

Contributors

florimondmanca, graingert, mxrch

aiometer's Issues

Bump anyio dependency to latest major version

Hi, are there any plans to bump the anyio dependency to 4.x.y?

There was a fairly important change in version 4.0.0 where anyio now raises the Python-native ExceptionGroup rather than its own. This means that anyio users can now handle exceptions with except*.
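
For illustration, a minimal sketch of what that handling could look like, assuming Python 3.11+ and anyio 4.x (the failing task here is a made-up example):

import anyio

async def fail(msg: str) -> None:
    raise ValueError(msg)

async def main() -> None:
    try:
        async with anyio.create_task_group() as tg:
            tg.start_soon(fail, "boom")
    except* ValueError as eg:
        # With anyio 4.x, task group failures surface as the built-in
        # ExceptionGroup, so they can be handled with except*.
        for exc in eg.exceptions:
            print("caught:", exc)

anyio.run(main)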

I love using aiometer but being pinned to anyio~=3.2 prevents me from using the new Python-native exception group error handling.

Thanks!

Allow AsyncGenerator or AsyncIterator for "args"

It would be useful to be able to start some tasks before all of the arguments are available. This appears to be supported in trimeter, although I haven't tested it, since its last commit was 4 years ago.

As an example use case, I'm currently trying to consume a paginated API. I have an asynchronous generator that yields the items from a page while pre-loading the next page. As aiometer stands right now, I believe I need to either generate the whole list of items up front (which requires downloading each page sequentially until I get an empty page), or yield chunks of results and call aiometer.amap on each chunk. It would be cleaner if I could pass the AsyncGenerator directly to amap and have it await items as needed.
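
For reference, the chunked workaround described above might look roughly like this sketch, where pages and process are hypothetical stand-ins for the paginated source and the per-item handler:

import aiometer

async def pages():
    # Hypothetical async generator standing in for "yield items while pre-loading the next page".
    for item in range(20):
        yield item

async def process(item):
    # Hypothetical per-item handler.
    return item * 2

async def consume(chunk_size: int = 5):
    results = []
    chunk = []
    async for item in pages():
        chunk.append(item)
        if len(chunk) == chunk_size:
            # Results arrive as tasks complete, not necessarily in input order.
            async with aiometer.amap(process, chunk) as chunk_results:
                async for result in chunk_results:
                    results.append(result)
            chunk = []
    if chunk:  # process any leftover partial chunk
        async with aiometer.amap(process, chunk) as chunk_results:
            async for result in chunk_results:
                results.append(result)
    return results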

Interaction with tenacity retry

Hi,

I was wondering whether aiometer interacts well with tenacity.retry. Specifically, I would like to rate-limit a large number of requests and also retry the ones that fail.

async with aiometer.amap(..., max_per_second=10)
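
The two should compose in principle: tenacity retries an individual call, while aiometer limits how many calls start per second. A rough sketch, assuming httpx for the requests (note that retried attempts still count against the server's own limit):

import functools

import aiometer
import httpx
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=10))
async def fetch(client: httpx.AsyncClient, url: str) -> httpx.Response:
    # Retried by tenacity on any exception, including HTTPStatusError from 429s.
    response = await client.get(url)
    response.raise_for_status()
    return response

async def main(urls: list[str]) -> None:
    async with httpx.AsyncClient() as client:
        async with aiometer.amap(
            functools.partial(fetch, client), urls, max_per_second=10
        ) as results:
            async for response in results:
                print(response.status_code)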

Update requirements

Could you update or loosen the version specifications in the requirements of the package? Specifically the restriction for typing-extensions~=3.10 is conflicting for me, as I need a newer version.

swap the stream contexts with the taskgroup context

        with send_channel, receive_channel:
            async with anyio.create_task_group() as task_group:

might be better as

        async with anyio.create_task_group() as task_group:
            with send_channel, receive_channel:

This way, exiting the amap context manager normally (i.e., when __aexit__ receives (None, None, None)) would allow currently running tasks to finish while preventing new tasks from being added. Exiting it with an exception (a (type, value, traceback) triple) would still cancel all tasks.

Handling of exceptions and documentation

Hi! Thank you for writing this useful library!

I couldn't find any documentation about exceptions. Can we use aiometer.run_all and make it continue even if there are exceptions so that we can gather them all at the end?
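
One pattern that appears to work today, sketched below (a workaround, not a documented aiometer feature), is to wrap each task so it returns its exception instead of raising, then separate successes from failures afterwards:

import functools

import aiometer
import anyio

async def safe(async_fn):
    # Catch broadly on purpose so a single failure does not cancel the other tasks.
    try:
        return await async_fn()
    except Exception as exc:
        return exc

async def main() -> None:
    async def ok() -> str:
        return "ok"

    async def boom() -> str:
        raise RuntimeError("boom")

    results = await aiometer.run_all(
        [functools.partial(safe, fn) for fn in (ok, boom, ok)]
    )
    errors = [r for r in results if isinstance(r, Exception)]
    print(results, errors)

anyio.run(main)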

Thank you.

Best practice for handling rate limits

I have a bit of code to which I have added aiometer. What I am trying to understand is how best to optimize it for maximum performance. The API I am calling has a rate limit of 300 requests per minute, so setting max_per_second to 5 (if I understand this correctly) should stay within the limit; however, I still hit 429s even then. My code is below:

import functools
from typing import Any, AsyncGenerator, Awaitable, Callable, List

import aiometer

async def async_iterate_paginated_api(
    function: Callable[..., Awaitable[Any]], **kwargs: Any
) -> AsyncGenerator[List[Any], None]:
    """Return an async iterator over the results of any paginated API."""
    # Fetch page 1 sequentially to learn the total page count.
    response = await function(**kwargs, pageNumber=1)
    yield response.get("entities")
    total_pages = response.get("pageCount")
    pages = list(range(2, total_pages + 1))

    async def process(function: Callable[..., Awaitable[Any]], kwargs: Any, page: Any):
        response = await function(**kwargs, pageNumber=page)
        return response.get("entities")

    # Fetch the remaining pages concurrently, starting at most 5 requests per second.
    async with aiometer.amap(
        functools.partial(process, function, kwargs), pages, max_per_second=5
    ) as results:
        async for result in results:
            yield result

Any feedback would be helpful.

Simplifying amap usage

Reading through the docs, it occurred to me that the async with usage of amap doesn't “look necessary” from a UX perspective.

When using it we always have to do two things:

  • Enter a context with async with (looks like an implementation detail)
  • Iterate over results with async for

As a user, you really only care about the second operation.

So what if we moved async with inside the implementation of amap, so that users can just do...

results = [result async for result in amap(process, items)]
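
For contrast, the current pattern requires both steps explicitly (using the same hypothetical process and items):

results = []
async with aiometer.amap(process, items) as amap_results:
    async for result in amap_results:
        results.append(result)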

requests as list

I'm using your library to try to drop-catch some domain names, so I query the API with the same request several times per second.
Do I always have to generate a list with the same request repeated thousands of times, or is there a more efficient way to do it?
Thanks

`gather` drop-in replacement

Hello!

asyncio's own gather is a bit of a dated design but still pretty common, at least in the codebases I'm dealing with at the moment. Converting them to amap would be nice, but it would also be dead useful to be able to drop in an aiometer.gather as a like-for-like replacement and add metering separately.

It's not impossible to shim on top of amap but would it be something you'd consider bringing into scope in the library itself? Especially since the exception handling semantics of gather can be a bit subtle.
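
For what it's worth, a minimal shim over run_all might look like the sketch below; it is not part of aiometer, and the exception semantics still differ from asyncio.gather (there is no return_exceptions, and a failure cancels the tasks that are still running):

import functools

import aiometer

async def _await_it(coro):
    return await coro

async def gather(*coros, max_at_once=None, max_per_second=None):
    # Results come back in submission order, like asyncio.gather.
    return await aiometer.run_all(
        [functools.partial(_await_it, coro) for coro in coros],
        max_at_once=max_at_once,
        max_per_second=max_per_second,
    )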
