h2o / quicly

A modular QUIC stack designed primarily for H2O

License: MIT License

C 91.67% C++ 0.49% CMake 0.66% Shell 0.36% Python 2.16% Perl 3.35% DTrace 1.20% Dockerfile 0.06% Makefile 0.06%

quicly's Introduction

H2O - an optimized HTTP server with support for HTTP/1.x, HTTP/2 and HTTP/3 (experimental)

CI Coverity Scan Build Status Fuzzing Status

Copyright (c) 2014-2019 DeNA Co., Ltd., Kazuho Oku, Tatsuhiko Kubo, Domingo Alvarez Duarte, Nick Desaulniers, Marc Hörsken, Masahiro Nagano, Jeff Marrison, Daisuke Maki, Laurentiu Nicola, Justin Zhu, Tatsuhiro Tsujikawa, Ryosuke Matsumoto, Masaki TAGAWA, Masayoshi Takahashi, Chul-Woong Yang, Shota Fukumori, Satoh Hiroh, Fastly, Inc., David Carlier, Frederik Deweerdt, Jonathan Foote, Yannick Koechlin, Harrison Bowden, Kazantsev Mikhail

H2O is a new generation HTTP server. Not only is it very fast, it also provides much quicker response to end-users when compared to older generations of HTTP servers.

Written in C and licensed under the MIT License, it can also be used as a library.

For more information, please refer to the documentation at h2o.examp1e.net.

Reporting Security Issues

Please report vulnerabilities to [email protected]. See SECURITY.md for more information.

quicly's People

Contributors

aloysaugustin, bhesmans, dch, deweerdt, ekr, gfx, hfujita, janaiyengar, jlaine, jsoref, karthikdasari0423, kazuho, kevinclark, kosekmi, marten-seemann, mathiasraoul, maxsharabayko, nalramli, piano-man, sescandor, sharksforarms, sknat, thejokr, toru, windrunner414, ymmt2005


quicly's Issues

Priority between new data and retransmissions

Now that we have a stream scheduler, we should make sure we use the right priorities for old vs. new data. My inclination is to always follow the per-stream priorities and not distinguish between old and new data.

rename ack.h, quicly_acks_t, quicly_ack_t to something else

  • quicly_acks_t is the in-flight packet map (the list of QUIC frames that have been sent but not yet acknowledged)
  • quicly_ack_t is the element type of quicly_acks_t
  • the two (and related structures / functions) are defined in quicly/ack.h

As @janaiyengar has suggested, the names are confusing. We need to rename.

Broken PTO timeout calculation?

Shouldn't something like this be used? Otherwise, r->pto_count is never bigger than 1. See below for the details of a test scenario.

diff --git a/include/quicly/loss.h b/include/quicly/loss.h
index a73e8e9..6ed3151 100644
--- a/include/quicly/loss.h
+++ b/include/quicly/loss.h
@@ -231,8 +231,7 @@ inline int quicly_loss_on_alarm(quicly_loss_t *r, uint64_t largest_sent, uint64_
         return quicly_loss_detect_loss(r, largest_acked, do_detect);
     }
     /* PTO */
-    if (r->pto_count == 0)
-        ++r->pto_count;
+    ++r->pto_count;
     *num_packets_to_send = 2;
     return 0;
 }

Related: "Replaces uses of TLP and RTO with PTO #134"

Test case:

  1. Run the server:
./cli -c t/assets/server.crt -k t/assets/server.key 0.0.0.0 4433
  2. Run the client and interrupt it almost immediately (Ctrl-C):
./cli localhost 4433 -p /100000000.txt > /dev/null

What can be seen with current code:

Lots of data (supposedly retransmits) sent by the server side for about 30 seconds (idle_timeout). Inter-packet delta is ~50-70ms.

[...]
Removed: I've noticed that I included the wrong log here.

CID encryption and rotation

Using a structure like below:

struct PerServerCID {
    uint8_t version;           // hard-coded to zero
    uint16_t process_id : 4;   // for forwarding packets to the correct process
    uint16_t thread_id : 12;   // for forwarding packet to the correct thread
    uint32_t conn_id;          // unique ID per connection
    uint8_t  path_id;          // path id
};

struct CIDPlaintext {
    uint32_t     server_id[4];  // for forwarding packets to the correct server
    uint32_t     zero;
    PerServerCID per_server;
};

CID = AES_ECB(key_id || AES_ECB(key, CIDPlaintext))

Design requirements:

  • use 4-byte server ID (to store internal IP address of each server)
  • 4-byte zero to detect bogus CIDs
  • 8-byte per-server CID to support 9-byte CID when running on a single server (using 64-bit cipher)
  • encode path_id directly, to avoid adding multiple entries to the CID -> connection hashmap
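Note that the bitfields in the struct above have implementation-defined layout; an explicit byte-packing sketch of the 8-byte per-server CID (field widths taken from the design above, byte order and all names are my assumptions, not part of the proposal) might look like:

```c
#include <stdint.h>

/* Hypothetical explicit packing of the 8-byte per-server CID sketched
 * above: 1 byte version, 4-bit process id, 12-bit thread id, 4-byte
 * connection id, 1 byte path id. Big-endian byte order is assumed. */
typedef struct {
    uint8_t  version;    /* hard-coded to zero */
    uint16_t process_id; /* uses 4 bits */
    uint16_t thread_id;  /* uses 12 bits */
    uint32_t conn_id;
    uint8_t  path_id;
} per_server_cid_t;

static void pack_cid(const per_server_cid_t *cid, uint8_t out[8])
{
    out[0] = cid->version;
    /* 4-bit process id in the high nibble, 12-bit thread id below it */
    uint16_t pt = (uint16_t)(((cid->process_id & 0xf) << 12) | (cid->thread_id & 0xfff));
    out[1] = (uint8_t)(pt >> 8);
    out[2] = (uint8_t)(pt & 0xff);
    out[3] = (uint8_t)(cid->conn_id >> 24);
    out[4] = (uint8_t)((cid->conn_id >> 16) & 0xff);
    out[5] = (uint8_t)((cid->conn_id >> 8) & 0xff);
    out[6] = (uint8_t)(cid->conn_id & 0xff);
    out[7] = cid->path_id;
}
```

The packed 8 bytes would then be the block fed to the 64-bit cipher in the single-server case.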

Cannot decrypt packets with Wireshark

Hello,

I initiated multiple client-server connections with CLI and captured the packets with Wireshark.

./cli -v -c ~/ssl/publickey.pem -k ~/ssl/private.key -l ~/ssl/secretkeys.log 127.0.0.1 4433
./cli -v -s tmp/session -p /100000.txt 127.0.0.1 4433

Wireshark states "Secrets are not available". CLI exports only EARLY_EXPORTER_SECRET and EXPORTER_SECRET. Is it possible to export the additional secrets according to NSS Key Log Format?

Or am I missing something? I tested two Wireshark versions v2.9.1rc0-456 and v3.1.0rc0-219 that match QUIC version draft-17.

Cheers,
Chris

implement key update

And also the retirement of keys for earlier epochs. Maybe we can (ab)use the sentmap to schedule them.

implement idle close

quicly should call on_conn_close when it locally observes either the client's or the server's idle timeout expiring, then enter the draining state.

State for callbacks

I'm trying to use this library in a C++ framework I am working on. I am having trouble finding a way to pass some kind of state into the callback functions that are called, namely on_receive and on_stream_open. Is there already a way that I'm missing, or is there currently no way to do this?

I'm thinking of changing the signature to something like:
some_function(<args>, void *state)
so that I can pass a pointer to some memory to be used in the callbacks. Global variables are not something I can use to fix this issue.
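For what it's worth, a common C idiom that avoids both globals and an extra void *state argument is to embed the callback struct inside the application's own state, and recover the enclosing struct with offsetof inside the callback. A minimal sketch (hypothetical names, not quicly's actual API):

```c
#include <stddef.h>

/* The library-side callback struct: the callback receives a pointer to
 * this struct itself, which the application embeds in its own state. */
typedef struct callbacks {
    void (*on_receive)(struct callbacks *self, const char *data);
} callbacks_t;

/* Application state wrapping the callback struct. */
typedef struct {
    callbacks_t cb; /* embedded member whose offset we know */
    int received;   /* application-specific state */
} app_ctx_t;

static void my_on_receive(callbacks_t *self, const char *data)
{
    (void)data;
    /* recover the enclosing application struct from the embedded member */
    app_ctx_t *ctx = (app_ctx_t *)((char *)self - offsetof(app_ctx_t, cb));
    ctx->received++;
}
```

The same pointer arithmetic works for any number of distinct contexts, since each callback invocation carries the address of its own embedded struct.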

add install target

Perhaps I'm missing something; that is, if quicly is supposed to be run from within h2o.

Code should handle Initial packet injection

An attacker on the side can inject packets encrypted with Initial keys that might interfere with the connection in various ways. Once the spec has stabilized around a way forward, we should do this.

fix differences from -13

With #9 being merged, master is something that is mostly like -12 + DT, but has the following differences:

  • uses implicit ACK + Handshake_Done frame to close the lower-epoch flows ASAP
  • uses "tls13 " as the base label for deriving traffic keys

I assume that we need to fix the two issues (potentially other issues as well) once draft-13 is published.

install on OSX

This issue is most definitely an issue with my limited C knowledge, so I'm apologizing in advance for this newbie question.

I'm getting the following error when trying to build quicly:

[ 37%] Built target quicly
Scanning dependencies of target cli
[ 40%] Building C object CMakeFiles/cli.dir/deps/picotls/lib/openssl.c.o
[ 43%] Building C object CMakeFiles/cli.dir/deps/picotls/lib/pembase64.c.o
[ 46%] Building C object CMakeFiles/cli.dir/deps/picotls/lib/picotls.c.o
[ 50%] Building C object CMakeFiles/cli.dir/src/cli.c.o
[ 53%] Linking C executable cli
Undefined symbols for architecture x86_64:
  "_X509_check_host", referenced from:
      _verify_cert in openssl.c.o
  "_X509_check_ip_asc", referenced from:
      _verify_cert in openssl.c.o
ld: symbol(s) not found for architecture x86_64

I installed OpenSSL using Homebrew and set the symlinks according to this article: https://medium.com/@timmykko/using-openssl-library-with-macos-sierra-7807cfd47892

nothing can be sent when receiving a packet just after the idle close timer expires

IIUC, the primary reason for having STATELESS_RESET is to let the client notice immediately that the connection is dead, so that it can retransmit the request using a new connection. However, when a server receives a request immediately after its idle close timer has expired, it cannot send any packet in response, because it is in the draining period. Is this the expected behavior?

IMO, we should always allow a server to respond whenever it receives a packet after the idle timeout has expired. One way of allowing that is to let an endpoint immediately discard the connection state when the idle close timer expires.

Question: why the AES_128_CTR is used in header protection

  1. After generating the initial secret, setup_cipher calls ptls_cipher_new to initialize an AES-128-CTR context. But in draft-18 I find that AES-ECB should be used:

5.4.3.  AES-Based Header Protection
...
mask = AES-ECB(hp_key, sample)

I want to know why.

  2. How is the sample used in AES-ECB? In my opinion, the sample is the input data that gets encrypted with hp_key under the AES-ECB schedule. But I find that the sample is used as the IV in the call to ptls_cipher_init. What is the role of the IV under the AES-CTR schedule?

Packet numbering should start at 0 in each PN space

First of all thanks so much for this QUIC implementation! I am currently trying to create a Python QUIC implementation and interop'ing it with quicly.

One thing I noticed is that quicly seems to use a single packet number counter for all the packets it sends. The draft 18 RFC says:

Packet numbers in each space start at packet number 0.

Am I mistaken in my interpretation?
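For illustration, keeping one send counter per packet-number space, so each space starts at 0 as the draft requires, could look like this (hypothetical names, not quicly's internals):

```c
#include <stdint.h>

/* One monotonically increasing packet number per PN space, each
 * starting at zero. Enum and struct names are illustrative only. */
enum { PN_SPACE_INITIAL, PN_SPACE_HANDSHAKE, PN_SPACE_APPDATA, PN_SPACE_COUNT };

typedef struct {
    uint64_t next_pn[PN_SPACE_COUNT]; /* zero-initialized */
} pn_spaces_t;

/* returns the packet number to use for the next packet in `space` */
static uint64_t next_packet_number(pn_spaces_t *s, int space)
{
    return s->next_pn[space]++;
}
```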

retain quicly_acks_t for some time after RTO

Even after the RTO timer fires for a particular PN, we still need to keep the quicly_ack_t for that PN for some more time, because the timeout could be due to the ACK being lost. In that case, the peer will retransmit the ACK and we need to reflect that.

Dropping the server initial packet causes 600ms delay

When the server's Initial packet is sent and dropped, connection takes ~600ms to get set up. This is in part because of #98, and in part because while initial and handshake packets are sent together first, the retransmission carries only the initial packet.

The fix is here https://quicwg.org/base-drafts/draft-ietf-quic-recovery.html#rfc.section.6.2: retransmit as much unacked Initial and Handshake data on a crypto retransmission timeout that can be sent (3 x received packet size, per https://quicwg.org/base-drafts/draft-ietf-quic-transport.html#rfc.section.8.1).

g++ compile error

In include/quicly/loss.h, function quicly_loss_init, g++ reports: sorry, unimplemented: non-trivial designated initializers not supported. It should be changed to:

*r = (quicly_loss_t){.conf = conf, .max_ack_delay = max_ack_delay, .tlp_count = 0, .rto_count = 0, .largest_sent_before_rto = 0, .time_of_last_packet_sent = 0, .largest_acked_packet = 0, .loss_time = INT64_MAX, .alarm_at = INT64_MAX};
quicly_rtt_init(&r->rtt, conf, initial_rtt);

In include/quicly/frame.h, function quicly_decode_new_connection_id_frame, the declaration of cid_len needs to be moved before the goto Fail.

I also got a warning:

frame.h:683:20: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (end - *src < token_len)

And when I use libquicly.a, I get an error:

error: dereferencing pointer to incomplete type ‘quicly_conn_t {aka struct st_quicly_conn_t}’

Maybe the struct should be declared in quicly.h?

Event logging

Each event will contain the following fields:

  • timestamp
  • level - one of: "packet", "frame", "cc"
    • maybe add "stream" as well?
  • event - name of the event "send", "receive", etc., which are level-specific
  • rest of the fields are event-specific

We will initially log the events using JSON, but the internal API will be kept neutral to the output format.
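A minimal sketch of such a format-neutral event record, with JSON produced only by a pluggable emitter (the field names follow the list above; the struct and function names are assumptions, not the actual internal API):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Format-neutral event record: the internal API only fills this in;
 * sinks such as the JSON emitter below decide the output format. */
typedef struct {
    int64_t timestamp;
    const char *level; /* one of "packet", "frame", "cc", ... */
    const char *event; /* level-specific: "send", "receive", ... */
} log_event_t;

/* One possible sink: serialize the common fields as JSON. */
static int event_to_json(const log_event_t *ev, char *buf, size_t len)
{
    return snprintf(buf, len, "{\"time\":%lld,\"level\":\"%s\",\"event\":\"%s\"}",
                    (long long)ev->timestamp, ev->level, ev->event);
}
```

Event-specific fields would be appended by the sink in the same way, keeping the producer side unaware of the wire format.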

Running server/client issues

Hi,

I'm trying to run a server and a client, but when I start running both, on the client side I get a console message which says "not found", and the connection isn't established. Is there any extra step I need to take before running the client and server?

Thanks

crash when trying to send STREAM_BLOCKED

FWIW, this crash happened when h2o tried to open a unidirectional stream and noticed that it is blocked by the client.

h2o: /mydev/h2o/deps/quicly/lib/quicly.c:3071: quicly_send: Assertion `max_streams->count == max_stream->stream_id / 4' failed.
received fatal signal 6
build/quic/h2o[0x50be1d] on_sigfatal at /mydev/h2o/src/main.c:1590
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x7fb406b23340] ?? ??:0
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39)[0x7fb406784cc9] ?? ??:0
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7fb4067880d8] ?? ??:0
/lib/x86_64-linux-gnu/libc.so.6(+0x2fb86)[0x7fb40677db86] ?? ??:0
/lib/x86_64-linux-gnu/libc.so.6(+0x2fc32)[0x7fb40677dc32] ?? ??:0
build/quic/h2o(quicly_send+0x197a)[0x558bca] quicly_send at /mydev/h2o/deps/quicly/lib/quicly.c:2938 (discriminator 1)
build/quic/h2o(h2o_http3_send+0x4e)[0x55ff9e] h2o_http3_send at /mydev/h2o/lib/http3/common.c:489
build/quic/h2o(h2o_http3_server_accept+0x1ff)[0x5618ff] h2o_http3_server_accept at /mydev/h2o/lib/http3/server.c:548
build/quic/h2o[0x5601b2] process_packets at /mydev/h2o/lib/http3/common.c:304
build/quic/h2o[0x5603ec] on_read at /mydev/h2o/lib/http3/common.c:373
build/quic/h2o[0x47e632] read_on_ready at /mydev/h2o/lib/common/socket/evloop.c.h:248
build/quic/h2o[0x47e736] run_pending at /mydev/h2o/lib/common/socket/evloop.c.h:547
build/quic/h2o(h2o_evloop_run+0x2f)[0x480d7f] h2o_evloop_run at /mydev/h2o/lib/common/socket/evloop.c.h:602
build/quic/h2o[0x50cab7] run_loop at /mydev/h2o/src/main.c:1763
build/quic/h2o(main+0xbf0)[0x466510] setup_signal_handlers at /mydev/h2o/src/main.c:1571
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fb40676fec5] ?? ??:0
build/quic/h2o[0x46654e] _start at ??:?

32bit or 64bit

Hi! I'm Jack Wu.

quicly is a great project. I recently ported it to MIPS (32-bit).
There, in quicly_stream_t *stream = quicly_get_stream(conn, -(1 + epoch)), the expression -(1 + epoch) evaluates to 0xFFFFFFFF (a positive value).
I modified it to quicly_stream_t *stream = quicly_get_stream(conn, -((int64_t)1 + epoch));
otherwise, stream ends up equal to NULL.
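The report above boils down to C's usual arithmetic conversions: when epoch has a 32-bit unsigned type, -(1 + epoch) is computed in unsigned arithmetic, wraps to 0xFFFFFFFF, and then widens to a large positive int64_t. A minimal demonstration, with uint32_t standing in for a 32-bit size_t (function names are illustrative):

```c
#include <stdint.h>

/* The buggy form: 1 + epoch is evaluated as 32-bit unsigned, so the
 * negation wraps and the wrapped value widens to a positive int64_t. */
int64_t stream_id_bad(uint32_t epoch)
{
    return -(1 + epoch);
}

/* The fixed form from the issue: casting one operand to int64_t first
 * keeps the whole computation in signed 64-bit arithmetic. */
int64_t stream_id_good(uint32_t epoch)
{
    return -((int64_t)1 + epoch);
}
```

On a typical ILP32 target, stream_id_bad(0) yields 4294967295 instead of the intended -1, which is why the lookup fails.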

sampling-based, accumulative event logging

To accomplish that, we need to do the following:

  • move event mask to quicly_conn_t
  • let the application set the event mask when calling quicly_accept or quicly_connect
  • create keyed stash for quicly_conn_t (like pthread_specific or h2o_context_get_handler_context) so that the event logger can accumulate events per connection
  • create an accumulative event logger (with an option to set the maximum buffer size per connection)

t/e2e.t occasionally fails

Especially on Travis CI, emitting a log like below. Happens even after merging #112.

        # {"type":"receive", "time":1552803886277, "conn":0, "dcid":"", "scid":"6180da03f2f407d7", "len":39, "first-octet":237}
        # {"type":"crypto-decrypt", "time":1552803886277, "conn":0, "pn":5, "len":5}
        # {"type":"quictrace-recv", "time":1552803886277, "conn":0, "pn":5, "len":5, "encryptionLevel":2}
        # {"type":"quictrace-recv-ack", "time":1552803886277, "conn":0, "ack-block-begin":2, "ack-block-end":3}
        # {"type":"packet-acked", "time":1552803886277, "conn":0, "pn":2, "newly-acked":1}
        # {"type":"packet-acked", "time":1552803886277, "conn":0, "pn":3, "newly-acked":1}
        # {"type":"stream-acked", "time":1552803886277, "conn":0, "stream-id":-3, "off":0, "len":52}
        # {"type":"quictrace-recv-ack", "time":1552803886277, "conn":0, "ack-delay":0}
        # {"type":"quictrace-cc-ack", "time":1552803886277, "conn":0, "min-rtt":1, "smoothed-rtt":1, "latest-rtt":1, "cwnd":14174, "inflight":48}
        # {"type":"cc-ack-received", "time":1552803886277, "conn":0, "pn":3, "acked-packets":1, "acked-bytes":94, "cwnd":14174, "inflight":48}
        # {"type":"receive", "time":1552803886277, "conn":0, "dcid":"", "len":39, "first-octet":71}
        # {"type":"crypto-decrypt", "time":1552803886277, "conn":0, "pn":6, "len":20}
        # {"type":"quictrace-recv", "time":1552803886277, "conn":0, "pn":6, "len":20, "encryptionLevel":3}
        # {"type":"quictrace-recv-ack", "time":1552803886277, "conn":0, "ack-block-begin":4, "ack-block-end":4}
        # {"type":"packet-acked", "time":1552803886277, "conn":0, "pn":4, "newly-acked":1}
        # {"type":"stream-acked", "time":1552803886277, "conn":0, "stream-id":0, "off":0, "len":14}
        # {"type":"quictrace-recv-ack", "time":1552803886277, "conn":0, "ack-delay":0}
        # {"type":"quictrace-cc-ack", "time":1552803886277, "conn":0, "min-rtt":1, "smoothed-rtt":1, "latest-rtt":1, "cwnd":14222, "inflight":0}
        # {"type":"cc-ack-received", "time":1552803886277, "conn":0, "pn":4, "acked-packets":1, "acked-bytes":48, "cwnd":14222, "inflight":0}
        # {"type":"quictrace-recv-stream", "time":1552803886277, "conn":0, "stream-id":0, "off":0, "len":12, "fin":1}
        # {"type":"stream-receive", "time":1552803886277, "conn":0, "stream-id":0, "off":0, "len":12}
        # {"type":"send", "time":1552803886278, "conn":0, "state":2}
        # {"type":"packet-prepare", "time":1552803886278, "conn":0, "first-octet":64, "dcid":"6180da03f2f407d7"}
        # {"type":"transport-close-send", "time":1552803886278, "conn":0, "error-code":256, "frame-type":0, "reason-phrase":""}
        # {"type":"packet-commit", "time":1552803886278, "conn":0, "pn":7, "len":32, "ack-only":1}
        # {"type":"quictrace-sent", "time":1552803886278, "conn":0, "pn":7, "len":32, "packet-type":3}
        # {"type":"send", "time":1552803886331, "conn":0, "state":2}
        # {"type":"packet-prepare", "time":1552803886331, "conn":0, "first-octet":64, "dcid":"6180da03f2f407d7"}
        # {"type":"transport-close-send", "time":1552803886331, "conn":0, "error-code":256, "frame-type":0, "reason-phrase":""}
        # {"type":"packet-commit", "time":1552803886331, "conn":0, "pn":8, "len":32, "ack-only":1}
        # {"type":"quictrace-sent", "time":1552803886331, "conn":0, "pn":8, "len":32, "packet-type":3}
        # {"type":"send", "time":1552803886332, "conn":0, "state":2}
        # {"type":"packet-prepare", "time":1552803886332, "conn":0, "first-octet":64, "dcid":"6180da03f2f407d7"}
        # {"type":"transport-close-send", "time":1552803886332, "conn":0, "error-code":256, "frame-type":0, "reason-phrase":""}
        # {"type":"packet-commit", "time":1552803886332, "conn":0, "pn":9, "len":32, "ack-only":1}
        # {"type":"quictrace-sent", "time":1552803886332, "conn":0, "pn":9, "len":32, "packet-type":3}
        # {"type":"send", "time":1552803886385, "conn":0, "state":2}
        # {"type":"packet-prepare", "time":1552803886385, "conn":0, "first-octet":64, "dcid":"6180da03f2f407d7"}
        # {"type":"transport-close-send", "time":1552803886385, "conn":0, "error-code":256, "frame-type":0, "reason-phrase":""}
        # {"type":"packet-commit", "time":1552803886385, "conn":0, "pn":10, "len":32, "ack-only":1}
        # {"type":"quictrace-sent", "time":1552803886385, "conn":0, "pn":10, "len":32, "packet-type":3}
        # {"type":"receive", "time":1552803886385, "conn":0, "dcid":"3469897bb2141b075389238d2d1d066c27", "scid":"9926188f5b96", "len":39, "first-octet":225}
        # {"type":"send", "time":1552803886385, "conn":0, "state":3}
        # {"type":"send", "time":1552803886386, "conn":0, "state":3}
        # {"type":"send", "time":1552803886440, "conn":0, "state":3}
        # {"type":"free", "time":1552803886440, "conn":0}

integrity protection method for tokens

We need to consider what algorithm we should use for protecting the integrity of retry tokens. There are at least four candidates.

  • RAND + HKDF + AES-GCM: Uses RAND + HKDF to create a unique key for each input, then use AES-GCM to encrypt the payload of the token. The token will be rand_bytes || AEAD(key: HKDF(rand_bytes), payload: payload, aad: rand_bytes).
  • RAND + AES-CBC + HMAC: Two secrets (one for AES-CBC and one for HMAC) directly cover all the tokens. RAND is used to generate IV.
  • HMAC: do not encrypt the payload, but just sign using HMAC.
  • CMAC: do not encrypt the payload, but just sign using CMAC (possibly based on AES-CBC).

The requirement is that the methods must be faster than the line speed. We need to decrypt the tokens in the incoming Initial packets arriving at line rate (the good news is that they are at least 1,200 bytes long), then send back Retry packets that contain tokens we encrypt. Assuming that the speed of Initial packets is 100Mpps, a server is required to do 200M operations per second.

However, HMAC is slow. You can only do about 10M-50M HMAC-SHA-256 operations per second per core. That means that we'd be burning 5 to 20 CPU cores just for doing HMAC when receiving Initials at 100Mpps. The HKDF-based approach is a no-go, because one HKDF operation involves at least two invocations of HMAC.

Because we need to use the CPU for other purposes as well (receive, decode, encode, send packets!), we might need to consider using CMAC.

Performance: transfer rate decreased over time

Hello,

I ran some simple benchmark test on localhost with the latest master branch using cli, and observed this non-linear Flow Completion Time with 10MB, 20MB and 30MB transfer:

[hoang@localhost quicly]$ time ./cli 127.0.0.1 4444 -p /10000000.txt > /dev/null
packets: received: 7979, sent: 7984, lost: 4, ack-received: 6193, bytes-sent: 283269

real	0m3.406s
user	0m0.092s
sys	0m0.166s
[hoang@localhost quicly]$ time ./cli 127.0.0.1 4444 -p /20000000.txt > /dev/null
packets: received: 15945, sent: 15949, lost: 4, ack-received: 14641, bytes-sent: 552329

real	0m14.463s
user	0m0.263s
sys	0m0.547s
[hoang@localhost quicly]$ time ./cli 127.0.0.1 4444 -p /30000000.txt > /dev/null
packets: received: 23917, sent: 23930, lost: 5, ack-received: 22162, bytes-sent: 845908

real	0m34.505s
user	0m0.478s
sys	0m0.930s

Both client and server are cli on the same machine; I am not sure whether this setup is relevant for benchmarking.
The perf report on the CPU usage of the server process is as follows:

  Children      Self  Command  Shared Object        Symbol
+   99.90%     0.00%  cli      libc-2.27.so         [.] __libc_start_main                                                                                                                                                         
+   99.90%     0.00%  cli      cli                  [.] main                                                                                                                                                                       
-   99.81%     0.01%  cli      cli                  [.] run_server                                                                                                                                                                 
   - 99.81% run_server                                                                                                                                                                                                             
      - 99.27% quicly_receive                                                                                                                                                                                                      
         - 99.26% handle_payload                                                                                                                                                                                                   
            - 99.24% quicly_sentmap_update                                                                                                                                                                                         
               - 99.23% on_ack_stream                                                                                                                                                                                              
                    99.22% __memmove_sse2_unaligned_erms                                                                                                                                                                           
        0.52% send_pending  

I don't know whether it has anything to do with quicly_ranges_add()/quicly_ranges_subtract().

Cheers,
Hoang

Replay Decoding DoS and other thoughts on malicious decoding costs

Sorry for the drive by issue. I haven't personally verified this, but in discussing it with Jana he thought it was worth logging, investigating, and likely fixing.

The concerning scenario is a client (perhaps the actual client, but perhaps a MITM acting as a client) replaying the same legitimate packet on an established connection at a high data rate. The server's work per packet might be disproportionate to the client's: doing both PN decryption and possibly full AEAD data decryption in order to determine that there is no new data in this packet. Theoretically the packet could be considered a duplicate after PN decoding and dropped much earlier. A client that wanted to force more server work would need to increment the PN and then re-encrypt the packet itself, in which case it is at least doing a comparable amount of work.

Relatedly, we ought to measure the cost of finding out that AEAD decryption failed, though I'm not sure what we could do about it. Consider an attacker that did increment the PN but did not re-encrypt the packet body to reflect the new AD. Such packets could also probably be generated at line rate easily. The cost of decryption failure on those packets may well be similar to the phenomenon above.

Stateless reset detection

There seems to be an issue in the is_stateless_reset function.

  • Offset calculation seems to be wrong per https://tools.ietf.org/html/draft-ietf-quic-transport-19#page-53
  • In the server case conn->super.peer.stateless_reset_token can be uninitialized as the client can only specify it via a NEW_CONNECTION_ID frame which isn't always sent. So in the unfortunate case of receiving a packet zeroed from byte 16 through 39, it will be detected as a reset.

reconsider 75% drop test

In #148, we have tentatively disabled the 75% drop tests, because we started to see the tests fail once we no longer retransmit packets excessively. Now, the most severe tests are the 50% drop ones.

However, I think we'd need to have tests that cover a severely congested network. Therefore, listing some thoughts on how we might want to proceed:

  • Instead of using a PRNG that generates each loss event as independent events, we might want to use a loss generator that drops like X of every Y packets.
  • We might want to record the distribution of times spent at each loss rate. The current approach only reports a test failure if any of the repeated tests does not finish within a predefined time, which makes it impossible to detect mild regressions.
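The deterministic loss generator mentioned above, dropping X of every Y packets instead of drawing independent per-packet random losses, could be sketched like this (hypothetical API, not part of the test suite):

```c
/* Deterministic loss generator: drops the first `drop` packets of
 * every cycle of `cycle` packets. Names are illustrative only. */
typedef struct {
    unsigned drop;  /* X: packets dropped per cycle */
    unsigned cycle; /* Y: cycle length */
    unsigned pos;   /* position within the current cycle */
} loss_gen_t;

/* returns 1 if the current packet should be dropped, 0 otherwise */
static int loss_gen_should_drop(loss_gen_t *g)
{
    int dropped = g->pos < g->drop;
    if (++g->pos == g->cycle)
        g->pos = 0;
    return dropped;
}
```

Unlike a PRNG, this guarantees that every window of Y consecutive packets sees exactly X losses, which makes the test behavior reproducible across runs.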
