valeriansaliou / bloom
:cherry_blossom: HTTP REST API caching middleware, to be used between load balancers and REST API workers.

Home Page: https://crates.io/crates/bloom-server

License: Mozilla Public License 2.0

Rust 91.48% Dockerfile 0.91% Shell 7.61%
cache ddos dos http infrastructure performance redis rest rust scale speed

bloom's Introduction

👋 Hey! I am Valerian.

I am a Software Engineer and Product Designer. I spend my time building things.

What I am working on:

  • I am currently CTO at Crisp, a customer support SaaS that I co-founded with Baptiste Jamin.
  • On the side, I am also working on Prose, a decentralized team messaging app, powered by XMPP.
  • I have a personal website, on which I sometimes blog, publish pictures and list books I have read.
  • I maintain popular OSS projects: Sonic (search index), Vigil (status page), Sales Tax (VAT calculator), and more.
  • Oh, and one last thing: I co-invented MakAir, the world's first open-source medical ventilator.

What I am interested in:

  • My domains of expertise are: messaging, protocols, cryptography, the XMPP standard, Rust programming, JavaScript full-stack development, application design, distributed systems, microservices and fault-tolerant architectures.
  • In my spare time, I also like to learn about economics, 3D printing, and CAD modeling.

🔒 All of my work is signed with my 🔑 GPG key. Use it to verify the authenticity of anything I publish.

bloom's People

Contributors

abbudao, eijebong, malanius, rafael-castro, valeriansaliou


bloom's Issues

Can Bloom use Bloom-Request-Shard: 0 as the default when the header is not found?

Hi there @valeriansaliou
Your fix for the double slashes in the pathname worked perfectly.
Now I do have one more question:

When there is no Bloom-Request-Shard header in the request, Bloom responds with "bloom-status: REJECT". If shard 0 is always the default, would it be possible to make Bloom assume shard 0 even when the shard header is absent?

For example, when using Crisp's Status Page and setting an API address for it to health-check, I can't set headers, so Bloom always returns "bloom-status: REJECT". There are other use cases where requiring the shard header even for the default shard 0 wouldn't be a great experience, such as offering a public API.
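As a workaround for now, the fronting proxy can inject the header itself, so clients that cannot set headers still reach shard 0. A minimal nginx sketch (the upstream address is a placeholder for wherever Bloom listens):

```
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Bloom-Request-Shard 0;
}
```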

By the way, do you have an ETA for the next version bump? I'm already testing in our staging environment and plan to deploy bloom and sonic to production soon. I will let you know when that happens.

Thanks!

Cached response is incomplete

Hi, I am getting clipped responses when Bloom is set up behind nginx. The first request looks good (using curl), but the second gets part of the response clipped (typically the last bit). docker-compose:

version: '3'

services:
  redis:
    image: redis
    restart: always

  dars:
    build: ../..
    volumes:
      - ../../data:/data:ro


  bloom:
    image: valeriansaliou/bloom:v1.28.0
    depends_on:
      - redis
      - dars
    volumes:
      - ./bloom-config.cfg:/etc/bloom.cfg:ro

  nginx:
    image: nginx
    depends_on:
      - bloom
    ports:
      - 8080:80
    volumes:
      - ./nginx:/etc/nginx:ro

ref: gauteh/dars#9

Is bloom adding an extra / in the pathname on purpose?

Hi there @valeriansaliou - first, I would like to thank you for adding support for forced locale in Sonic. I just found out about Bloom and spent some time testing it today...

When I use Bloom to proxy to a demo express js api, it is adding an extra / in the pathname.

Example:

curl --header "Bloom-Request-Shard: 0" http://127.0.0.1:5555/testing-path

The req object in the express js get this values:

Url {
     protocol: null,
     slashes: null,
     auth: null,
     host: null,
     port: null,
     hostname: null,
     hash: null,
     search: null,
     query: null,
     pathname: '//testing-path',
     path: '//testing-path',
     href: '//testing-path',
     _raw: '//testing-path' },
 params: { '0': '//testing-path' }

I can bypass this by adding an extra / before each of my routes, but is this the expected behavior of Bloom's proxying process?
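For reference, a doubled slash like this usually comes from naively concatenating a base URL that ends in / with a path that starts with /. A slash-safe join could be sketched like this (illustrative only, not Bloom's actual code):

```rust
/// Join a base URL and a request path without producing "//",
/// whether or not the base ends with a slash or the path starts
/// with one. (Illustrative sketch, not Bloom's implementation.)
fn join_url(base: &str, path: &str) -> String {
    format!(
        "{}/{}",
        base.trim_end_matches('/'),
        path.trim_start_matches('/')
    )
}

fn main() {
    assert_eq!(
        join_url("http://127.0.0.1:5555/", "/testing-path"),
        "http://127.0.0.1:5555/testing-path"
    );
    assert_eq!(
        join_url("http://127.0.0.1:5555", "testing-path"),
        "http://127.0.0.1:5555/testing-path"
    );
    println!("ok");
}
```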

Thanks!

P.s.: I'm loving your work!

Multi proxy.shard always set to first

A multi-shard proxy.shard configuration cannot work correctly, because the shard URI is hard-coded to the first shard:

fn map_shards() -> [Option<Uri>; MAX_SHARDS as usize] {
    // Notice: this array cannot be initialized using the short format, as hyper::Uri doesnt \
    //   implement the Copy trait, hence the ugly hardcoded initialization vector w/ Nones.
    let mut shards = [
        None, None, None, None, None, None, None, None, None, None, None, None, None, None, None,
        None,
    ];

    for shard in &APP_CONF.proxy.shard {
        // Shard number overflows?
        if shard.shard >= MAX_SHARDS {
            panic!("shard number overflows maximum of {} shards", MAX_SHARDS);
        }

        // Store this shard
        shards[shard.shard as usize] = Some(
            format!(
                "http://{}:{}",
                // always set to first
                APP_CONF.proxy.shard[0].host, APP_CONF.proxy.shard[0].port
            )
            .parse()
            .expect("could not build shard uri"),
        );
    }

    shards
}

should be:

fn map_shards() -> [Option<Uri>; MAX_SHARDS as usize] {
    // Notice: this array cannot be initialized using the short format, as hyper::Uri doesnt \
    //   implement the Copy trait, hence the ugly hardcoded initialization vector w/ Nones.
    let mut shards = [
        None, None, None, None, None, None, None, None, None, None, None, None, None, None, None,
        None,
    ];

    for shard in &APP_CONF.proxy.shard {
        // Shard number overflows?
        if shard.shard >= MAX_SHARDS {
            panic!("shard number overflows maximum of {} shards", MAX_SHARDS);
        }

        // Store this shard
        shards[shard.shard as usize] = Some(
            format!(
                "http://{}:{}",
                shard.host, shard.port
            )
            .parse()
            .expect("could not build shard uri"),
        );
    }

    shards
}
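As an aside, the "ugly hardcoded initialization vector" mentioned in the comment is avoidable on Rust 1.63+ even for non-Copy element types, via std::array::from_fn. A self-contained sketch (using String in place of hyper::Uri):

```rust
const MAX_SHARDS: u8 = 16;

// Build the all-None array without hardcoding sixteen `None`s, even
// though the element type (String here, standing in for hyper::Uri)
// does not implement Copy.
fn empty_shards() -> [Option<String>; MAX_SHARDS as usize] {
    std::array::from_fn(|_| None)
}

fn main() {
    let shards = empty_shards();
    assert_eq!(shards.len(), MAX_SHARDS as usize);
    assert!(shards.iter().all(Option::is_none));
    println!("ok");
}
```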

Panicked at 'could not spawn redis pool'

Hi there @valeriansaliou :)
I just updated Bloom to the latest cargo package (before, I had it compiled from source) and now I'm getting this error:

root@api02:~/www/api# RUST_BACKTRACE=1 /root/.cargo/bin/bloom -c /etc/bloom.cfg
thread 'main' panicked at 'could not spawn redis pool', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/bloom-server-1.26.0/src/cache/store.rs:97:31
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
   1: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:70
             at src/libstd/sys_common/backtrace.rs:58
             at src/libstd/panicking.rs:200
   2: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:215
             at src/libstd/panicking.rs:478
   3: std::panicking::begin_panic
   4: std::sync::once::Once::call_once::{{closure}}
   5: std::sync::once::Once::call_inner
             at src/libstd/sync/once.rs:387
   6: <bloom::APP_CACHE_STORE as core::ops::deref::Deref>::deref
   7: bloom::main
   8: std::rt::lang_start::{{closure}}
   9: main
  10: __libc_start_main
  11: _start
Do you have any idea why?
I had two servers with Bloom installed from source; the one I didn't update is still working, the other one isn't. I also created a third server with a clean install from cargo, and got the same error.

Thanks

Still getting extra leading slashes

Surely related to #7. I'm trying out the current HEAD: we're getting "501 Not Extended" from Bloom, and seeing this on the API worker when using Postman (note the double slash):

127.0.0.1 - - [21/May/2019:20:20:13 +0000] "GET //api/ipam/prefixes HTTP/1.0" 404 16686 "-" "PostmanRuntime/7.13.0"

The same call directly to the API worker, changing only the host/port, works fine.

Support for Armv7 architecture on Docker images

Hey @valeriansaliou, how are you doing?
I noticed that recently, you have merged support for ArmV7, which is excellent! On the other hand, we still need a Docker image published on the registry compatible with the architecture.
Do you plan to support this, or are you open to contributions?

Metrics & Plugin Interface

Goal: build a generic plugin system for Bloom, to be able to extract data and inject live configuration changes (an I/O interface).

On top of it, we could integrate a wide range of features without needing to touch the Bloom core; for instance, a metrics system (which is relevant to the Bloom use case).

Early requirements

Metrics

  1. Instance RPS (total, per request type)
  2. Success/error ratio
  3. Request/response latency per Bloom-Status (MISS, HIT, DIRECT)
  4. Cache hit/miss ratio (probably per-route?)

Request: Client proxy

I have a remote HTTPS API server that I do not control.

I would like to use bloom to cache the responses from the remote server:

curl -H "Bloom-Request-Shard: 0;" -H "Bloom-Request-Ignore: 1;" http://bloom-host:8080/my-url

In this case I am:

  • making an HTTP request to Bloom, which proxies to the HTTPS API server
  • specifying a new flag, Bloom-Request-Ignore, to disable the cache for the request

I have a few remote API servers, so an example of how to configure multiple hosts/shards would be useful.

NOTE: I am currently able to use bloom over HTTP to accomplish this, except for controlling the cache bypass from the client.
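For the record, here is how I would expect multiple hosts to be declared: one repeated [[proxy.shard]] table per shard number, selected at request time via the Bloom-Request-Shard header. Hostnames and ports below are placeholders:

```
[proxy]

[[proxy.shard]]
shard = 0
host = "api-one.local"
port = 8080

[[proxy.shard]]
shard = 1
host = "api-two.local"
port = 8080
```

A client would then target the second upstream with curl -H "Bloom-Request-Shard: 1" http://bloom-host:8080/my-url.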

failed unwrapping body value for key - compression / gzip

Hi

I have Bloom set up and working for an API.

However, when I request that the API server gzips its responses Bloom errors with "failed unwrapping body value for key".

The following logs are with request header Accept-Encoding: gzip

Bloom v1.28.

2020-01-07T12:47:01.578493131Z (DEBUG) - accepted new connection (10.21.80.3:53826)
2020-01-07T12:47:01.578498587Z (DEBUG) - handled new request
2020-01-07T12:47:01.578509145Z (DEBUG) - scheduling Read for: 0
2020-01-07T12:47:01.578513162Z (DEBUG) - adding I/O source: 2122317837
2020-01-07T12:47:01.578517346Z (DEBUG) - scheduling Read for: 13
2020-01-07T12:47:01.578522173Z (DEBUG) - loop poll - 56.781µs
2020-01-07T12:47:01.578528055Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 554344367 }
2020-01-07T12:47:01.578536241Z (DEBUG) - loop process, 5.171µs
2020-01-07T12:47:01.578543324Z (DEBUG) - read 840 bytes
2020-01-07T12:47:01.57855415Z (DEBUG) - parsed 13 headers (840 bytes)
2020-01-07T12:47:01.578560065Z (DEBUG) - incoming body is content-length (0 bytes)
2020-01-07T12:47:01.578565802Z (DEBUG) - called proxy serve
2020-01-07T12:47:01.578587346Z (INFO) - handled request: GET on /api/foobar
2020-01-07T12:47:01.578594454Z (DEBUG) - hashing value: 
2020-01-07T12:47:01.578600584Z (DEBUG) - hashing value: [HTTP/1.1|GET|/api/foobar|foo=bar|null]
2020-01-07T12:47:01.578609508Z (DEBUG) - generated bucket: [HTTP/1.1|GET|/api/foobar|foo=bar|null] with hash: 18fa98d6
2020-01-07T12:47:01.578617019Z (INFO) - tunneling for ns = bloom:0:c:dc56d17a:18fa98d6
2020-01-07T12:47:01.578623002Z (DEBUG) - key: bloom:0:c:dc56d17a:18fa98d6 cacheable, reading cache
2020-01-07T12:47:01.578628817Z (DEBUG) - loop poll - 102.255µs
2020-01-07T12:47:01.578632984Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 554454399 }
2020-01-07T12:47:01.578637083Z (DEBUG) - loop process, 12.033µs
2020-01-07T12:47:01.580016286Z (INFO) - acquired empty meta value from cache
2020-01-07T12:47:01.580038525Z (DEBUG) - reuse idle connection for "http://localhost:8080"
2020-01-07T12:47:01.580054487Z (DEBUG) - loop poll - 1.46448ms
2020-01-07T12:47:01.580058337Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 555933469 }
2020-01-07T12:47:01.580061846Z (DEBUG) - loop process, 6.832µs
2020-01-07T12:47:01.580065531Z (DEBUG) - scheduling Read for: 3
2020-01-07T12:47:01.580075462Z (DEBUG) - flushed 840 bytes
2020-01-07T12:47:01.580078819Z (DEBUG) - loop poll - 56.685µs
2020-01-07T12:47:01.580082126Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 556001391 }
2020-01-07T12:47:01.58008547Z (DEBUG) - loop process, 5.068µs
2020-01-07T12:47:01.584704063Z (DEBUG) - read 0 bytes
2020-01-07T12:47:01.584723689Z (DEBUG) - read eof
2020-01-07T12:47:01.584728056Z (DEBUG) - dropping I/O source: 12
2020-01-07T12:47:01.584733428Z (DEBUG) - loop poll - 4.652152ms
2020-01-07T12:47:01.58473694Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 560661347 }
2020-01-07T12:47:01.584740703Z (DEBUG) - loop process, 6.55µs
2020-01-07T12:47:01.585265Z (DEBUG) - read 203 bytes
2020-01-07T12:47:01.585281022Z (DEBUG) - parsed 5 headers (175 bytes)
2020-01-07T12:47:01.585291208Z (DEBUG) - incoming body is chunked encoded
2020-01-07T12:47:01.585296811Z (DEBUG) - incoming chunked header: 0xA (10 bytes)
2020-01-07T12:47:01.58530176Z (DEBUG) - loop poll - 474.793µs
2020-01-07T12:47:01.585307266Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561146833 }
2020-01-07T12:47:01.585316641Z (DEBUG) - loop process, 5.872µs
2020-01-07T12:47:01.585321409Z (DEBUG) - loop poll - 19.84µs
2020-01-07T12:47:01.585326183Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561175675 }
2020-01-07T12:47:01.585331302Z (DEBUG) - loop process, 6.353µs
2020-01-07T12:47:01.585336267Z (DEBUG) - incoming chunked header: 0x8 (8 bytes)
2020-01-07T12:47:01.585341077Z (DEBUG) - loop poll - 8.796µs
2020-01-07T12:47:01.585345769Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561193262 }
2020-01-07T12:47:01.585350845Z (DEBUG) - loop process, 4.999µs
2020-01-07T12:47:01.58535592Z (DEBUG) - loop poll - 2.884µs
2020-01-07T12:47:01.585359941Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561203406 }
2020-01-07T12:47:01.585363344Z (DEBUG) - loop process, 4.851µs
2020-01-07T12:47:01.58536669Z (DEBUG) - scheduling Read for: 3
2020-01-07T12:47:01.585370242Z (DEBUG) - loop poll - 13.513µs
2020-01-07T12:47:01.585373624Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561224078 }
2020-01-07T12:47:01.585376951Z (DEBUG) - loop process, 4.896µs
2020-01-07T12:47:01.585431492Z (DEBUG) - read 20 bytes
2020-01-07T12:47:01.585436611Z (DEBUG) - incoming chunked header: 0xA (10 bytes)
2020-01-07T12:47:01.585449507Z (DEBUG) - loop poll - 97.012µs
2020-01-07T12:47:01.585453095Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561328231 }
2020-01-07T12:47:01.585456393Z (DEBUG) - loop process, 4.604µs
2020-01-07T12:47:01.585459677Z (DEBUG) - loop poll - 2.959µs
2020-01-07T12:47:01.585462891Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561338129 }
2020-01-07T12:47:01.585466252Z (DEBUG) - loop process, 4.42µs
2020-01-07T12:47:01.585469561Z (DEBUG) - incoming body completed
2020-01-07T12:47:01.585472779Z (DEBUG) - scheduling Read for: 3
2020-01-07T12:47:01.585476009Z (DEBUG) - scheduling Read for: 3
2020-01-07T12:47:01.585479312Z (DEBUG) - loop poll - 14.018µs
2020-01-07T12:47:01.585482968Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561358694 }
2020-01-07T12:47:01.585486271Z (DEBUG) - loop process, 4.654µs
2020-01-07T12:47:01.585489525Z (DEBUG) - pooling idle connection for "http://localhost:8080"
2020-01-07T12:47:01.585492991Z (DEBUG) - loop poll - 8.463µs
2020-01-07T12:47:01.585496335Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561373717 }
2020-01-07T12:47:01.585499628Z (DEBUG) - loop process, 3.551µs
2020-01-07T12:47:01.585502931Z (ERROR) - failed unwrapping body value for key: bloom:0:c:dc56d17a:18fa98d6, ignoring
2020-01-07T12:47:01.585625181Z (DEBUG) - flushed 141 bytes
2020-01-07T12:47:01.585649631Z (DEBUG) - scheduling Read for: 13
2020-01-07T12:47:01.585656823Z (DEBUG) - loop poll - 107.049µs
2020-01-07T12:47:01.585662765Z (DEBUG) - loop time - Instant { tv_sec: 9052, tv_nsec: 561486028 }
2020-01-07T12:47:01.585667954Z (DEBUG) - loop process, 4.566µs
2020-01-07T12:47:06.96199733Z (DEBUG) - loop poll - 5.376220633s
2020-01-07T12:47:06.962037899Z (DEBUG) - loop time - Instant { tv_sec: 9057, tv_nsec: 937713671 }
2020-01-07T12:47:06.962043674Z (DEBUG) - loop process, 31.664µs

Config

[server]
log_level = "debug"
inet = "0.0.0.0:8081"

[control]
inet = "127.0.0.1:8811"
tcp_timeout = 300

[proxy]
[[proxy.shard]]
shard = 0
host = "localhost"
port = 8080

[cache]
ttl_default = 60 # seconds
executor_pool = 64
disable_read = false
disable_write = false
compress_body = true

[redis]
host = "xxxxxx"
port = 6379
database = 0
pool_size = 80
max_lifetime_seconds = 60
idle_timeout_seconds = 500
connection_timeout_seconds = 1
max_key_size = 1048576 # bytes
max_key_expiration = 600 # seconds

Any help would be appreciated 🙂 .

Cache ETag Computations

Currently, when a HIT is served to a client, the ETag is re-processed from cached data and added back in the response headers.

There's a small performance penalty in doing this, so we'd better cache the ETag computation in the cache layer and serve it back directly from there.
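The intended change can be sketched as follows: compute the ETag once at cache-write time, store it alongside the body, and serve it back verbatim on every HIT. (Hash choice and struct names here are illustrative, not Bloom's internals.)

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative cache entry: the ETag is computed once when the entry
// is written, then read back directly on every HIT.
struct CachedEntry {
    body: Vec<u8>,
    etag: String,
}

fn compute_etag(body: &[u8]) -> String {
    let mut hasher = DefaultHasher::new();
    body.hash(&mut hasher);
    format!("W/\"{:x}\"", hasher.finish())
}

fn store(body: Vec<u8>) -> CachedEntry {
    let etag = compute_etag(&body); // paid once, at write time
    CachedEntry { body, etag }
}

fn main() {
    let entry = store(b"{\"ok\":true}".to_vec());
    // A HIT just reads entry.etag; no recomputation is needed.
    assert_eq!(entry.etag, compute_etag(&entry.body));
    println!("ok");
}
```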

Newline being added to the response

Hey @valeriansaliou, how are you doing?
I've started to adopt bloom in my company to cache some services. One of these services is a legacy system that returns bare strings as responses (neither JSON nor XML). When adding bloom to cache responses, we noticed that a newline was being added to the response, which broke some services that weren't expecting it.
I read most of the source code and couldn't find anything that could be adding this additional character, but you know better where to look. In the meantime, I will create a minimal reproducible example to debug it better.

Unable to workaround acquired empty meta value from cache

Hello there,

I like your idea very much. I would love to use Bloom as a cache for my backend JSON REST API written in Rust, but I am unable to make it work. I spent hours trying, without success. I have prepared my test environment on Debian Linux (Buster).

I have installed bloom server using cargo install bloom-server, now I have bloom-server 1.35.2 installed (output from bloom -V).

My Redis version is 7.0.11-1rl1~buster1, configured according to your example config (Redis is also used for other caching, so I use a different database number, as you will see later).

This is my bloom config:

# Bloom
# HTTP REST API caching middleware
# Configuration file
# Example: https://github.com/valeriansaliou/bloom/blob/master/config.cfg

[server]
log_level = "info"
inet = "127.0.0.1:8051"

[control]
inet = "127.0.0.1:8081"
tcp_timeout = 300

[proxy]
shard_default = 0

[[proxy.shard]]
shard = 0
host = "localhost"
port = 8205

[cache]
ttl_default = 60
executor_pool = 64

disable_read = false
disable_write = false

compress_body = true

[redis]
host = "localhost"
port = 6379

database = 41

pool_size = 80
max_lifetime_seconds = 60
idle_timeout_seconds = 600
connection_timeout_seconds = 1

max_key_size = 256000
max_key_expiration = 2592000

My nginx is proxy passing this way:

proxy_pass http://127.0.0.1:8051;
proxy_set_header Bloom-Request-Shard 0;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

Before that there are also CORS setups (also carefully done following your docs), but my issue is not there, because I can get my data.

The problem is that I always miss the cache: the Bloom-Status header is still MISS. I tried with and without sending the Bloom-Response-TTL header from my API. Everything is cached into Redis (keys are visible), but then I get an error. This is the Bloom log (INFO level):

(INFO) - handled request: GET on /api/v2/articles
(INFO) - tunneling for ns = bloom:0:c:dc56d17a:c0b944a5
(INFO) - acquired empty meta value from cache
(INFO) - handled request: GET on /api/v2/notices
(INFO) - tunneling for ns = bloom:0:c:dc56d17a:dd2cc6f4
(INFO) - acquired empty meta value from cache

In my redis database I can see:

127.0.0.1:6379[41]> keys *
1) "bloom:0:a:dc56d17a"

etc. (These logs are from history, not all captured at the same moment.) I am still trying to figure it out.

Do you have any suggestions? I would really like to have Bloom working for me.

Thanks

Bloom depends on buggy versions of libraries

https://asan.saethlin.dev/ub?crate=bloom-server&version=1.34.0

This crate depends on buggy versions of redis and bytes. Please run cargo update.

warning: the following packages contain code that will be rejected by a future version of Rust: redis v0.12.0
test cache::write::tests::it_fails_saving_cache - should panic ... thread 'cache::write::tests::it_fails_saving_cache' panicked at
'attempted to leave type `bytes::bytes::Inner` uninitialized, which is invalid', /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/panicking.rs:126:5
stack backtrace:
   0: rust_begin_unwind
             at /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:617:5
   1: core::panicking::panic_nounwind_fmt
             at /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/panicking.rs:96:14
   2: core::panicking::panic_nounwind
             at /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/panicking.rs:126:5
   3: core::mem::uninitialized
             at /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/mem/mod.rs:692:9
   4: bytes::bytes::Inner::with_capacity
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bytes-0.4.12/src/bytes.rs:1822:40
   5: bytes::bytes::Bytes::with_capacity
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bytes-0.4.12/src/bytes.rs:417:20
   6: bytes::bytes::Bytes::new
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bytes-0.4.12/src/bytes.rs:435:9
   7: <hyper::proto::chunk::Chunk as core::default::Default>::default
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hyper-0.11.27/src/proto/chunk.rs:85:29
   8: <futures::stream::concat::Concat2<S> as futures::future::Future>::poll::{{closure}}
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-0.1.31/src/stream/concat.rs:49:52
   9: core::result::Result<T,E>::map
             at /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/result.rs:746:25
  10: <futures::stream::concat::Concat2<S> as futures::future::Future>::poll
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-0.1.31/src/stream/concat.rs:46:9
  11: <futures::future::map::Map<A,F> as futures::future::Future>::poll
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-0.1.31/src/future/map.rs:30:23
  12: futures::future::chain::Chain<A,B,C>::poll
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-0.1.31/src/future/chain.rs:26:23
  13: <futures::future::and_then::AndThen<A,B,F> as futures::future::Future>::poll
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-0.1.31/src/future/and_then.rs:32:9
  14: <alloc::boxed::Box<F> as futures::future::Future>::poll
             at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-0.1.31/src/future/mod.rs:113:13
  15: bloom::cache::write::tests::it_fails_saving_cache
             at ./src/cache/write.rs:177:17
  16: bloom::cache::write::tests::it_fails_saving_cache::{{closure}}
             at ./src/cache/write.rs:176:32
  17: core::ops::function::FnOnce::call_once
             at /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
  18: core::ops::function::FnOnce::call_once
             at /root/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread caused non-unwinding panic. aborting.
error: test failed, to rerun pass `--bin bloom`

Caused by:
  process didn't exit successfully: `/root/build/target/x86_64-unknown-linux-gnu/debug/deps/bloom-97cb1c8c8a589dc2` (signal: 6, SIGABRT: process abort signal)
error: 1 target failed:

Stale while revalidate support

Hi,

I'm very interested in this proxy; it looks very useful for scalability in microservices.

Are there any plans to support Stale-while-revalidate to improve response time after cache entries expire?

Unable to use environment variables in config

Hello, I'd like to be able to reference environment variables inside the configuration file. For now, it doesn't seem to work, or I couldn't figure out how to do it.

For example, I have bloom packed inside a Docker container along with the backend, and I'd like to reuse the same image across environments and configure the Redis host via a REDIS_HOST env var, as each environment has its own cluster.
The rest of the backend configuration is done through environment variables (following the 12-factor principles), and it would be great to have the same possibility in bloom as well.

For now, I can work around this by replacing a placeholder value in the config with an entrypoint script, but that doesn't feel as clean, and it's quite rare nowadays to find something that doesn't support configuration from env vars 🙂

PS: I might be able to hack on this, but never done something in Rust before so I would need some pointers on how to deal with this.
