
igrigorik / http-2

885 stars · 46 watchers · 63 forks · 15.23 MB

Pure Ruby implementation of HTTP/2 protocol

Home Page: https://httpwg.github.io/specs/rfc7540.html

License: MIT License

Ruby 100.00%

http-2's Introduction

HTTP-2


Pure Ruby, framework- and transport-agnostic implementation of the HTTP/2 protocol and HPACK header compression.

Getting started

$> gem install http-2

This implementation makes no assumptions as to how the data is delivered: it could be a regular Ruby TCP socket, your custom event loop, or any other transport you wish to use - e.g. ZeroMQ, avian carriers, etc.

Your code is responsible for feeding data into the parser, which performs all of the necessary HTTP/2 decoding and state management; in the other direction, the parser emits bytes (encoded HTTP/2 frames) that you can then route to the destination. Roughly, this works as follows:

require 'http/2'

socket = YourTransport.new

conn = HTTP2::Client.new
conn.on(:frame) {|bytes| socket << bytes }

while bytes = socket.read
  conn << bytes
end

Check out the provided client and server implementations for basic examples.

Connection lifecycle management

Depending on the role of the endpoint you must initialize either a Client or a Server object. Doing so picks the appropriate header compression / decompression algorithms and stream management logic. From there, you can subscribe to connection level events, or invoke appropriate APIs to allocate new streams and manage the lifecycle. For example:

# - Server ---------------
server = HTTP2::Server.new

server.on(:stream) { |stream| ... } # process inbound stream
server.on(:frame)  { |bytes| ... }  # encoded HTTP/2 frames

server.ping { ... } # run liveness check, process pong response
server.goaway # send goaway frame to the client

# - Client ---------------
client = HTTP2::Client.new
client.on(:promise) { |stream| ... } # process push promise

stream = client.new_stream # allocate new stream
stream.headers({':method' => 'POST', ...}, end_stream: false)
stream.data(payload, end_stream: true)

Events emitted by the connection object:

:promise client role only, fires once for each new push promise
:stream server role only, fires once for each new client stream
:frame fires once for every encoded HTTP/2 frame that needs to be sent to the peer

Stream lifecycle management

A single HTTP/2 connection can multiplex multiple streams in parallel: multiple requests and responses can be in flight simultaneously and stream data can be interleaved and prioritized. Further, the specification provides a well-defined lifecycle for each stream (see below).

The good news is, all of the stream management, state transitions, and error checking is handled by the library. All you have to do is subscribe to the appropriate events (marked with the ":" prefix in the diagram below) and provide your application logic to handle request and response processing.

                      +--------+
                 PP   |        |   PP
             ,--------|  idle  |--------.
            /         |        |         \
           v          +--------+          v
    +----------+          |           +----------+
    |          |          | H         |          |
,---|:reserved |          |           |:reserved |---.
|   | (local)  |          v           | (remote) |   |
|   +----------+      +--------+      +----------+   |
|      | :active      |        |      :active |      |
|      |      ,-------|:active |-------.      |      |
|      | H   /   ES   |        |   ES   \   H |      |
|      v    v         +--------+         v    v      |
|   +-----------+          |          +-----------+  |
|   |:half_close|          |          |:half_close|  |
|   |  (remote) |          |          |  (local)  |  |
|   +-----------+          |          +-----------+  |
|        |                 v                |        |
|        |    ES/R    +--------+    ES/R    |        |
|        `----------->|        |<-----------'        |
| R                   | :close |                   R |
`-------------------->|        |<--------------------'
                      +--------+

For the sake of example, let's take a look at a simple server implementation:

conn = HTTP2::Server.new

# emits new streams opened by the client
conn.on(:stream) do |stream|
  stream.on(:active) { } # fires when stream transitions to open state
  stream.on(:close)  { } # stream is closed by client and server

  stream.on(:headers) { |head| ... } # header callback
  stream.on(:data) { |chunk| ... }   # body payload callback

  # fires when client terminates its request (i.e. request finished)
  stream.on(:half_close) do

    # ... generate_response

    # send response
    stream.headers({
      ":status" => 200,
      "content-type" => "text/plain"
    })

    # split response between multiple DATA frames
    stream.data(response_chunk, end_stream: false)
    stream.data(last_chunk)
  end
end

Events emitted by the Stream object:

:reserved fires exactly once when a push stream is initialized
:active fires exactly once when the stream becomes active and is counted towards the open stream limit
:headers fires once for each received header block (multi-frame blocks are reassembled before emitting this event)
:data fires once for every DATA frame (no buffering)
:half_close fires exactly once when the opposing peer closes its end of connection (e.g. client indicating that request is finished, or server indicating that response is finished)
:close fires exactly once when both peers close the stream, or if the stream is reset
:priority fires once for each received priority update (server only)

Prioritization

Each HTTP/2 stream has a priority value that can be sent when the new stream is initialized, and optionally reprioritized later:

client = HTTP2::Client.new

default_priority_stream = client.new_stream
custom_priority_stream = client.new_stream(priority: 42)

# sometime later: change priority value
custom_priority_stream.reprioritize(32000) # emits PRIORITY frame

On the opposite side, the server can optimize its stream processing order or resource allocation by accessing the stream priority value (stream.priority).
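As a toy illustration (not library code) of how a server might use these values, pending stream work can be ordered by priority before processing. The numbers below, and the assumption that a lower value means higher priority, are purely illustrative:

```ruby
# Hypothetical sketch: order stream work by the value exposed via
# stream.priority. Assumes lower values mean higher priority; adjust
# to your scheduler's semantics.
Entry = Struct.new(:id, :priority)

def processing_order(streams)
  streams.sort_by(&:priority).map(&:id)
end

streams = [Entry.new(1, 42), Entry.new(3, 16), Entry.new(5, 32_000)]
puts processing_order(streams).inspect # => [3, 1, 5]
```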

Flow control

Multiplexing multiple streams over the same TCP connection introduces contention for shared bandwidth resources. Stream priorities can help determine the relative order of delivery, but priorities alone are insufficient to control how the resource allocation is performed between multiple streams. To address this, HTTP/2 provides a simple mechanism for stream and connection flow control.

Connection and stream flow control is handled by the library: all streams are initialized with the default window size (65,535 bytes), and send/receive window updates are processed automatically. The window is decremented on outgoing data transfers and incremented on receipt of WINDOW_UPDATE frames; if the window is exhausted, data frames are automatically buffered until the window is updated.

The only thing left is for your application to specify the logic as to when to emit window updates:

conn.buffered_amount     # check amount of buffered data
conn.window              # check current window size
conn.window_update(1024) # increment connection window by 1024 bytes

stream.buffered_amount     # check amount of buffered data
stream.window              # check current window size
stream.window_update(2048) # increment stream window by 2048 bytes
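The send-side behaviour described above (decrement on send, buffer when the window is exhausted, drain on update) can be modelled with a toy class. This is a sketch of the concept only, not the library's internals:

```ruby
# Toy model of send-side flow control: data goes out while the window has
# room, is buffered otherwise, and the buffer drains when the window grows.
class ToyWindow
  attr_reader :window, :sent, :buffered

  def initialize(window = 65_535)
    @window = window
    @sent = +""
    @buffered = +""
  end

  def send_data(bytes)
    room = [@window, bytes.bytesize].min
    @sent << bytes.byteslice(0, room)
    @buffered << bytes.byteslice(room, bytes.bytesize - room).to_s
    @window -= room
  end

  def window_update(increment)
    @window += increment
    send_data(@buffered.slice!(0, @buffered.bytesize)) unless @buffered.empty?
  end
end

w = ToyWindow.new(10)
w.send_data("a" * 15)  # 10 bytes go out, 5 are buffered
w.window_update(5)     # buffered bytes drain
puts w.sent.bytesize   # => 15
```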

Server push

An HTTP/2 server can send multiple replies to a single client request. To do so, first it emits a "push promise" frame which contains the headers of the promised resource, followed by the response to the original request, as well as promised resource payloads (which may be interleaved). A simple example is in order:

conn = HTTP2::Server.new

conn.on(:stream) do |stream|
  stream.on(:headers) { |head| ... }
  stream.on(:data) { |chunk| ... }

  # fires when client terminates its request (i.e. request finished)
  stream.on(:half_close) do
    promise_header = { ':method' => 'GET',
                       ':authority' => 'localhost',
                       ':scheme' => 'https',
                       ':path' => "/other_resource" }

    # initiate server push stream
    push_stream = nil
    stream.promise(promise_header) do |push|
      push.headers({...})
      push_stream = push
    end

    # send response
    stream.headers({
      ":status" => 200,
      "content-type" => "text/plain"
    })

    # split response between multiple DATA frames
    stream.data(response_chunk, end_stream: false)
    stream.data(last_chunk)

    # now send the previously promised data
    push_stream.data(push_data)
  end
end

When a new push promise stream is sent by the server, the client is notified via the :promise event:

conn = HTTP2::Client.new
conn.on(:promise) do |push|
  # process push stream
end

The client can cancel any given push stream (via .close), or disable server push entirely by sending the appropriate settings frame:

client.settings(settings_enable_push: 0)
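For reference, the frame emitted by that call is small and fully specified by RFC 7540 Section 6.5: a nine-byte frame header followed by one six-byte setting. A hand-rolled encoding for illustration (a sketch, not the gem's framer):

```ruby
# Hand-rolled SETTINGS frame disabling server push (RFC 7540, Section 6.5):
# 9-byte header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream id)
# followed by one 6-byte setting (16-bit identifier, 32-bit value).
def settings_enable_push_frame(value)
  payload = [0x2, value].pack("nN")  # SETTINGS_ENABLE_PUSH identifier = 0x2
  length  = payload.bytesize
  header  = [length >> 16, length & 0xFFFF, 0x4, 0x0, 0x0].pack("CnCCN")
  header + payload
end

frame = settings_enable_push_frame(0)
puts frame.bytesize          # => 15 (9-byte header + 6-byte payload)
puts frame.unpack("H*")[0]   # => "000006040000000000000200000000"
```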

Specs

To run specs:

rake

License

(MIT License) - Copyright (c) 2013-2019 Ilya Grigorik
(MIT License) - Copyright (c) 2019 Tiago Cardoso

http-2's People

Contributors

alextwoods, alloy, aried3r, bengreenberg, byronformwalt, cjyclaire, danielmorrison, dawidpieper, dm1try, ferdinandrosario, georgeu2000, honeyryderchuck, igrigorik, invisiblefunnel, ioquatix, jpfuentes2, jzzocc, kapcod, kenichi, mkauf, msroot, mullermp, okkez, phluid61, sankarcheppali, southwolf, tamird, tatsuhiro-t, thedrow, xiejiangzhi


http-2's Issues

Client: Upgrade protocol

Not so much an issue as a currently non-existent feature request with a possible solution. The upgrade_server.rb example has been tested using nghttp and adds an #upgrade method to HTTP2::Server. A matching method doesn't exist for the client, which is, granted, not such a relevant feature. But I've been needing an implementation to test my workflow, and have come up with the following solution:

def upgrade(data:)
  @connection.send_connection_preface
  stream = @connection.new_stream
  stream.on(:headers) { |headers| ... } # set up response callbacks
  @connection << data if data
end

This method is called after an established "HTTP/1 upgrade to h2c" session socket is already around. The connection preface is sent immediately, as per the spec (although the spec states that the connection is then in half-closed mode; I'm not sure how much of that is reflected internally). After that, stream 1 is created, along with the callbacks. The last line feeds in any frames remaining in the HTTP/1 socket buffer (I've been using net/http in my tests, and it makes it impossible to access the buffer via public API).

Do you think such a thing is worth adding, or maybe documenting in an example (granted, if I make it work with the example upgrade server :) )?

promise: using the same callback both for request as response headers

I've just noticed that, when trying to read promises from a client, the same callback is used both for when the promise arrives, and when the response headers come:

# example/client.rb
conn.on(:promise) do |promise|
  promise.on(:headers) do |h|
    # will get the push headers AND the response headers
    # "promise headers:  [[":method",  "GET"]....
    # "promise headers:  [[":status",  "200"]....
    log.info "promise headers: #{h}"
  end
end

I'd propose to solve this by sending the request headers in the same callback, which will be more straightforward:

conn.on(:promise) do |promise, request_headers|
  promise.on(:headers) do |response_headers|
...

@igrigorik what do you think? is this feasible?

Example with priorities

Is there any example of HTTP/2 in Ruby serving an HTML file with prioritization of resources?

Failing client requests

I'm using the latest published gem (0.9.0).

I'm working on an asynchronous implementation which has feature parity with HTTP/1 (connection per request and sequential connections using keep-alive). My HTTP/2 implementation for the most part appears to work, except that the behaviour depends on when I start the read loop.

Here are two dumps from my client:

Sent frame: {:type=>:settings, :stream=>0, :payload=>[[:settings_max_concurrent_streams, 100]]}
Sent frame: {:type=>:headers, :flags=>[:end_headers, :end_stream], :payload=>{":method"=>"GET", ":path"=>"/", ":scheme"=>"https", ":authority"=>"www.codeotaku.com", "accept"=>"*/*", "user-agent"=>"nghttp2/1.30.0"}, :stream=>1}

-- start read loop here

Received frame: {:length=>18, :type=>:settings, :flags=>[], :stream=>0, :payload=>[[:settings_max_concurrent_streams, 128], [:settings_initial_window_size, 65536], [:settings_max_frame_size, 16777215]]}
Sent frame: {:type=>:settings, :stream=>0, :payload=>[], :flags=>[:ack]}
Received frame: {:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>2147418112}
Received frame: {:length=>0, :type=>:settings, :flags=>[:ack], :stream=>0, :payload=>[]}

Received frame: {:length=>160, :type=>:headers, :flags=>[:end_headers], :stream=>1, :payload=>"H\x03301\\\x010\x00\x84BF\x9BQ\x90d\x01SA\xFB\x96E5\x96\xCAGQjM\x1E\xBF\x00\x86\xA0\xE4\x1ALz\xBF\x85`\xD5H_?\x00\x89 \xC99V!\xEAM\x87\xA3\x8A\xA4~V\x1C\xC5\x81\xE7\x1A\x00?\x00\x83\x90i/\x96\xDFi~\x94\x10\x14\xD0;\x14\x10\x02\xF2\x80f\xE3-\xDCi\xE51h\xDF\x00\x89\xF2\xB5g\xF0[\v\"\xD1\xFA\x91\xD7=\xA81\xEASX\xD0\x82\xD51lQ\xB5\xC2\xB8\x7F\x00\x85Al\xEE[?\x9C\xAAcU\xE5\x80\xAE\x10\xAE\xFA\x9F\xEDMs\xDA\x83\x1E\xA55\x8D\b-S\x16\xC5\e\\+\x87"}
Received frame: {:length=>0, :type=>:data, :flags=>[:end_stream], :stream=>1, :payload=>""}

It appears that the point at which the read loop is started (i.e. before new_stream or after) changes the behaviour of the client connection, and in this case it fails:

-- start read loop here

Received frame: {:length=>18, :type=>:settings, :flags=>[], :stream=>0, :payload=>[[:settings_max_concurrent_streams, 128], [:settings_initial_window_size, 65536], [:settings_max_frame_size, 16777215]]}
Sent frame: {:type=>:settings, :stream=>0, :payload=>[], :flags=>[:ack]}
Received frame: {:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>2147418112}
Sent frame: {:type=>:headers, :flags=>[:end_headers, :end_stream], :payload=>{":scheme"=>"https", ":method"=>"GET", ":path"=>"/", ":authority"=>"www.codeotaku.com", "accept"=>"*/*", "user-agent"=>"spider"}, :stream=>1}
Received frame: {:length=>8, :type=>:goaway, :flags=>[], :stream=>0, :last_stream=>0, :error=>:protocol_error}

I'm not absolutely certain what the issue is yet but I thought I'd report my initial findings. I'll follow up with more details as they become available.

@streams_recently_closed update failing under multiple threads

I'm referring specifically to this snippet, which under certain circumstances (multiple requests arrive and are scheduled to different threads) causes the following error:

E, [2017-09-17T20:17:15.872990 #23156] ERROR -- : RuntimeError: can't add a new key into hash during iteration
E, [2017-09-17T20:17:15.873078 #23156] ERROR -- : /Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/connection.rb:662:in `block in activate_stream'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/emitter.rb:22:in `block in once'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/emitter.rb:34:in `block in emit'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/emitter.rb:33:in `delete_if'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/emitter.rb:33:in `emit'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/stream.rb:567:in `complete_transition'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/stream.rb:610:in `manage_state'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/flow_buffer.rb:55:in `send_data'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/stream.rb:133:in `send'
/Projects/palanca/.bundle/ruby/2.3.0/gems/http-2-0.8.4/lib/http/2/stream.rb:193:in `data'

Now, one can try to rewrite that particular snippet, but I'd say that, under these circumstances, it's hard to replace that bit without introducing a race condition, at least without adding a lock.

My question would be more: since the public API here is very limited (#<<, #headers and #data) and one interacts mostly through callbacks which are invoked indirectly, should one ideally handle all requests from one connection in the same thread, or could one get away with locking a few (if not all) of those public API calls?

As it currently stands, though, a connection cannot be shared across threads due to the error described above.
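One generic way to lock those public API calls, independent of this library and using only the standard library, is to wrap the connection object so that every call is serialized through a single Mutex. A sketch, demonstrated here with a plain counter rather than an HTTP2 connection:

```ruby
require "delegate"

# Generic serializing wrapper (illustrative sketch): every method call on
# the wrapped object runs while holding one mutex, so concurrent calls
# (e.g. #<<, #headers, #data) cannot interleave.
class Synchronized < SimpleDelegator
  def initialize(obj)
    super
    @mutex = Mutex.new
  end

  def method_missing(name, *args, &block)
    @mutex.synchronize { super }
  end
end

# Demo object standing in for a connection: a counter with a racy increment.
counter = Class.new do
  attr_reader :n
  def initialize; @n = 0; end
  def bump; @n += 1; end
end.new

safe = Synchronized.new(counter)
4.times.map { Thread.new { 1000.times { safe.bump } } }.each(&:join)
puts safe.n # => 4000 (no lost updates, since each bump holds the mutex)
```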

Too many WINDOW_UPDATE frames generated by the client

I've created this small client implementation, also using http-2, to test the server. I'm now testing the "GET a file" case, using a 10MB file. I'm getting request timeouts more than occasionally, and when I don't, I see response times up to 3x longer than for a similar HTTP/1 GET.

Inspecting the frames and digging further, it looks like the client is generating an awful lot of WINDOW_UPDATE frames. I say an awful lot because the same request using nghttp generates far fewer:

$ nghttp -vu http://127.0.0.1:41203 | grep WINDOW_UPDATE | wc -l
      66
$ bundle exec ruby -Itest my_test.rb 2>&1 | grep client | grep window_update | wc -l
  101912

Inspecting the frames further, I see a discrepancy in the WINDOW_UPDATE frames received by the server when using both examples. Here's a small sample from the first 10 for each case:

# for nghttp
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>32959}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>32959}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>32839}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>32839}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>32856}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>32856}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>32769}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>32769}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>32769}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>32769}
...

# for http-2 client
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>40}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>40}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>20}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>20}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>312}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>312}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>58}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>58}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>22}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>22}
{:length=>4, :type=>:window_update, :flags=>[], :stream=>0, :increment=>83}
...

It definitely looks like something is fishy in the client's window size calculation.
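For context, implementations that generate fewer updates typically coalesce them: consumed bytes are accumulated and a single WINDOW_UPDATE is emitted only once some threshold (commonly half the window) is crossed. A plain-Ruby sketch of the idea, not a patch for the gem:

```ruby
# Toy receiver-side coalescing: instead of one WINDOW_UPDATE per DATA
# frame, accumulate consumed bytes and emit a single update once more
# than half the window has been consumed. Threshold is illustrative.
class CoalescingWindow
  attr_reader :updates

  def initialize(window = 65_535)
    @window = window
    @consumed = 0
    @updates = []
  end

  def on_data(chunk_size)
    @consumed += chunk_size
    return if @consumed <= @window / 2
    @updates << @consumed  # one WINDOW_UPDATE covering all consumed bytes
    @consumed = 0
  end
end

w = CoalescingWindow.new(65_535)
100.times { w.on_data(1_000) }  # 100 small DATA frames...
puts w.updates.length           # => 3 (...only three WINDOW_UPDATE frames)
```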

Question: how to pass in data > 65,535 bytes

Hi Ilya,
I'm trying to understand how to pass in data > 65,535 bytes.

When I issue the following response from a server:

conn.on(:stream) do |stream|

  # [...]

  body = "a" * 100_000
  headers = { ':content-length' => 100_000 } # snippet
  stream.headers(headers, end_stream: false)
  stream.data(body, end_stream: true)
end

Then on the receiving end (the client) I have:

stream.on(:data) do |data|
  p data.length
end

Which prints out:

16384
16384
16384
16383

which sums up to 65,535, the default window size, after which my client hangs waiting for more data (that never comes).

I have read about flow control but I'm not understanding if I'm supposed to change the window size on the fly (on the server?), or if I need to create a second stream with the remaining size of the data. If it's the first option, the README file states to use:

stream.window_update(2048)

which, however, raises undefined method 'window_update' for #<HTTP2::Stream:0x007fd9cd18f548>, or a:

conn.settings(streams: 100, window: Float::INFINITY)

which raises an HTTP2::Error::CompressionError: Unknown settings ID error.

Can you please advise? Thank you.
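As background for the numbers observed above: 16,384 bytes is the default SETTINGS_MAX_FRAME_SIZE and 65,535 bytes the default initial flow-control window in RFC 7540, so three full DATA frames plus one 16,383-byte frame exactly exhaust the window, which is why the transfer stalls there until a window update arrives. A quick check:

```ruby
max_frame_size = 16_384  # SETTINGS_MAX_FRAME_SIZE default (RFC 7540)
default_window = 65_535  # SETTINGS_INITIAL_WINDOW_SIZE default

# Splitting the window into max-size DATA frames yields the observed sizes.
chunks = []
remaining = default_window
while remaining > 0
  size = [max_frame_size, remaining].min
  chunks << size
  remaining -= size
end

puts chunks.inspect  # => [16384, 16384, 16384, 16383]
puts chunks.sum      # => 65535
```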

headers could be symbols?

Is it possible to apply #to_s to header keys? That would allow us to use symbols rather than strings.

WebMock adapter

It seems there is no WebMock adapter for this gem yet. Does anyone out there have something in the making?

Background: We are using RSpec with VCR and can't record http-2 connections

NameError: uninitialized constant HTTP2::Server::Base64

Here:

buf = HTTP2::Buffer.new Base64.urlsafe_decode64(settings.to_s)

I guess this has only been tested with the upgrade_server.rb example, as the require is being done there.

You can require the base64 lib in the server file and be done with it. However, you can also get away with unpacking the string, thereby bypassing the require:

require "base64"
Base64.decode64(base64str)
# equals
base64str.unpack("m*")[0]
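One caveat to the substitution above, added here for completeness: decode64 and the "m*" unpack directive use the standard Base64 alphabet, while urlsafe_decode64 expects - and _ in place of + and /, so the two only agree on input that avoids those characters:

```ruby
require "base64"

standard = "+/8="  # standard-alphabet encoding of bytes 0xFB 0xFF
urlsafe  = "-_8="  # same bytes, URL-safe alphabet

# decode64 and unpack("m*") agree on the standard alphabet...
puts Base64.decode64(standard) == standard.unpack("m*")[0]  # => true

# ...but the URL-safe form needs urlsafe_decode64 (or a tr beforehand).
puts Base64.urlsafe_decode64(urlsafe) ==
     Base64.decode64(urlsafe.tr("-_", "+/"))                # => true
```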

Implement http2-12 and compression-07

I created a fork and started working on HPACK-07.
I've just finished Huffman encoder/decoder and now working on other parts of HPACK.

I think I can create a PR but HPACK-07 is useful only after h2-12 is complete.

Update rubocop to 0.48.1

We found a potential security vulnerability in one of your dependencies.
The rubocop dependency defined in Gemfile has a known low severity security vulnerability in version range < 0.48.1 and should be updated.

Nginx for Rack apps?

Hi Ilya,

Just looking through: tenderlove/the_metal#5 ...

Do you consider Nginx (with today's alpha release of HTTP/2) as a proxy for development mode? There is a project in the proof-of-concept phase (https://github.com/ngxenv/ngxenv) that can be used locally without any dependencies, pure bash code (like rbenv); it should work on Linux/OSX. Not sure if anyone needs it, to be honest; it solved my particular task of having Nginx locally and doing A/B testing before upgrading Nginx on production servers. It compiles in a tmp directory which is git-ignored by default, and stores each binary individually, so there is no mess with Homebrew during version upgrades or when trying out a specific module (especially since SPDY and HTTP/2 support are separate modules). What do you think?

Anatoly

Client failing to GET https://nghttp2.org

While trying to debug the other issue reported, I've realized that I can't request a publicly accessible website with the simple client (in this case, https://nghttp2.org). Just try the example client:

>  ruby example/client.rb https://nghttp2.org
Sending HTTP 2.0 request
Sent frame: {:type=>:settings, :stream=>0, :payload=>[[:settings_max_concurrent_streams, 100]]}
Sent frame: {:type=>:headers, :flags=>[:end_headers, :end_stream], :payload=>[[":scheme", "https"], [":method", "GET"], [":authority", "nghttp2.org:443"], [":path", ""], ["accept", "*/*"]], :stream=>1}
[Stream 1]: closing client-end of the stream
Received frame: {:length=>18, :type=>:settings, :flags=>[], :stream=>0, :payload=>[[:settings_max_concurrent_streams, 100], [:settings_initial_window_size, 1048576], [:settings_header_table_size, 8192]]}
Sent frame: {:type=>:settings, :stream=>0, :payload=>[], :flags=>[:ack]}
Received frame: {:length=>0, :type=>:settings, :flags=>[:ack], :stream=>0, :payload=>[]}
Received frame: {:length=>4, :type=>:rst_stream, :flags=>[], :stream=>1, :error=>:protocol_error}
[Stream 1]: stream closed

The difference I see from the frame output of nghttp is that the latter does the whole "connection dance" before sending the headers frame (send/receive settings and acks, send priorities...).

Server doesn't work with nghttp2

If I run the Ruby server, then try to make a request with nghttp2 like in the example readme, the request fails:

Server window:

[aaron@TC example (master)]$ ruby server.rb
Starting server on port 8080
New TCP connection!
Exception: HTTP2::Error::HandshakeError, HTTP2::Error::HandshakeError - closing socket.

Client window:

[aaron@TC example (master)]$ nghttp -vnu http://localhost:8080
[  0.001] Connected
[  0.001] HTTP Upgrade request
GET / HTTP/1.1
Host: localhost:8080
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c-14
HTTP2-Settings: AAMAAABkAAQAAP__
Accept: */*
User-Agent: nghttp2/0.7.13


Some requests were not processed. total=1, processed=0

AFAICT, the state is waiting_magic, but it doesn't recognize the header sent from nghttp2. I'll continue to investigate, but I thought I'd post here!
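For reference, a server in the waiting_magic state is looking for the fixed 24-octet connection preface from RFC 7540 Section 3.5, whereas the nghttp upgrade request above arrives as HTTP/1.1 text. Conceptually, the check is just a prefix comparison (a sketch, not the gem's actual code):

```ruby
# The 24-octet client connection preface from RFC 7540, Section 3.5.
PREFACE = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def preface?(buffer)
  buffer.byteslice(0, PREFACE.bytesize) == PREFACE
end

puts preface?(PREFACE + "\x00\x00\x00\x04\x00")  # => true  (preface + frame)
puts preface?("GET / HTTP/1.1\r\n")              # => false (HTTP/1.1 upgrade)
```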

Errors eaten by lib when running threaded

Hello,
First of all - thank you for providing the first ruby HTTP/2 library.

I'm trying to implement a realistic client example that uses your library. The one you provide is great; however, it is missing the practical scenario of having a client socket continuously listening for incoming responses and events (to my understanding, HTTP/2 basically requires two-way sockets), while still allowing library users to send out frames.

One possible choice is to have a thread dedicated to listening, so I have coded a very simple example on how to do so. However, any errors that are generated in the thread appear to be eaten up by the library.

Try this code:

require 'socket'
require 'uri'
require 'http/2'

class Client

  def initialize
    @uri = URI.parse("http://106.186.112.116")

    @socket      = nil
    @read_thread = nil

    ensure_socket
  end

  def get
    headers = {
      ':scheme' => @uri.scheme,
      ':method' => 'GET',
      ':path'   => "/",
      'host'    => @uri.host
    }
    stream  = h2.new_stream

    ensure_socket

    stream.headers(headers, end_stream: true)

    receive
  end

  def ensure_socket
    return if @socket

    @socket = TCPSocket.new(@uri.host, @uri.port)

  rescue SocketError => e
    puts "Could not connect to socket (#{e.class} exception: #{e.message})"
    close
  end

  def h2
    @h2 ||= HTTP2::Client.new.tap do |h2|
      h2.on(:frame) do |bytes|
        puts "Sending bytes: #{bytes.unpack("H*").first}"
        @socket.print bytes
        @socket.flush
      end

      h2.on(:frame_sent) do |frame|
        puts "Sent frame: #{frame.inspect}"
      end

      h2.on(:frame_received) do |frame|
        puts "Received frame: #{frame.inspect}"

        # raise "EXPLODE HERE"
      end
    end
  end

  def receive
    return if @read_thread

    @read_thread = Thread.new do
      loop do
        begin
          data = @socket.read_nonblock(1024)
        rescue Errno::EAGAIN
          retry
        end

        begin
          h2 << data
        rescue => e
          puts "#{e.class} exception: #{e.message} - closing socket."
          e.backtrace.each { |l| puts "\t" + l }
          close
        end

        break unless @socket && !@socket.closed? && !@socket.eof?
      end

      close
    end

    @read_thread.abort_on_exception = true
  end

  def close
    @socket.close if @socket && !@socket.closed?
    @read_thread.exit if @read_thread

    @socket      = nil
    @h2          = nil
    @read_thread = nil
  end
end

c = Client.new
c.get

while true
  sleep 1
end

Run it as is, and this will issue a GET request to the http://106.186.112.116 test server. Everything should just work, and you will see the response frames. You can also issue multiple c.get instructions, and they all will work.

Now, uncomment the line # raise "EXPLODE HERE" and run again. You will see the following:

HTTP2::Error::ProtocolError exception: HTTP2::Error::ProtocolError - closing socket.
    /Users/roberto/.rvm/gems/ruby-2.3.0@apnotic/gems/http-2-0.8.0/lib/http/2/connection.rb:650:in `connection_error'
    /Users/roberto/.rvm/gems/ruby-2.3.0@apnotic/gems/http-2-0.8.0/lib/http/2/connection.rb:324:in `rescue in receive'
    /Users/roberto/.rvm/gems/ruby-2.3.0@apnotic/gems/http-2-0.8.0/lib/http/2/connection.rb:163:in `receive'
    test.rb:75:in `block (2 levels) in receive'
    test.rb:67:in `loop'
    test.rb:67:in `block in receive'

Obviously this isn't the error that one would like to see in the logs.

Do you have any recommendations to fix this, or an example of a realistic client that can keep on listening for server events?

Thank you,
r.

Server-Push: how to properly not push when user has a resource cached

I'm building an API for server-push. It works if I just push 200+data, but this seems inefficient for when the user already has the resource in the cache. HTTP1 has the Etag/Last-Modified mechanism, but I don't know how to build the same behaviour for HTTP2/server-push. Specifically I've been looking at this documentation and this example. I'm using Chrome 54, and analysing the http headers with the developer tools doesn't show me any link header. Am I missing something?

I've built an example to demonstrate. The push itself is not working as expected: the response headers are sent, push headers are sent, response data is sent, push data is sent, but the first thing I get afterwards is a request for the same resource I just pushed. I'm assuming I'm misusing the API?
But in the end, the intended goal of the example would be to push the css file, and send 304 if it is cached in the client. How to best do it?

Closing a connection: how to safely close

The examples are not very specific on this, so I have to ask: how should one know for sure that it's safe to terminate the socket on the server side, or inform the server to terminate on the client side?

The way I see it, and since there is already a callback for that, one has to:

  • client: send a goaway frame and terminate the socket
  • server: set an on(:goaway) callback, and terminate the socket

Is such usage deviating from the HTTP/2 spec?

memory leak

Regarding ostinelli/net-http2#7, I was able to reproduce the memory leak outside of net-http2 with a separate client, so the leak appears to be with http-2.

My class for sending messages is below, though even with the hack to delete @listeners it still leaks. (It's been a while since I looked at this code, so I'm guessing that made it leak less.) In production sending a handful of messages every second over a persistent connection (Apple's APNS servers), the process balloons to 512mb in about 7 hours (which triggers it to be restarted).

This is with Ruby 2.2.4p230 on OpenBSD. Any help troubleshooting this would be appreciated.

require "http/2"
require "openssl"
require "resolv"
require "ostruct"
require "socket"
require "timeout"

# monkeypatch to free up memory
module HTTP2
  module Emitter
    def delete_listeners
      if @listeners
        @listeners.each do |k,v|
          @listeners.delete(k)
        end
      end

      @listeners = nil
    end
  end
end

class HTTP2Client
  attr_accessor :hostname, :ssl_context, :ssl_socket, :tcp_socket, :h2_client,
    :h2_stream, :headers, :body, :done

  DRAFT = "h2"

  def initialize(hostname, ip = nil, ctx = nil)
    if !ctx
      ctx = OpenSSL::SSL::SSLContext.new
    end

    ctx.npn_protocols = [ DRAFT ]
    ctx.npn_select_cb = lambda do |protocols|
      DRAFT if protocols.include?(DRAFT)
    end

    self.ssl_context = ctx

    if !ip
      ip = Resolv.getaddress(hostname)
    end

    begin
      Timeout.timeout(10) do
        self.tcp_socket = TCPSocket.new(ip, 443)

        sock = OpenSSL::SSL::SSLSocket.new(self.tcp_socket, self.ssl_context)
        sock.sync_close = true
        sock.hostname = hostname
        sock.connect

        self.ssl_socket = sock
      end
    rescue Timeout::Error
      warn "timed out connecting to #{ip}"
      return nil
    end

    self.hostname = hostname

    self.h2_client = HTTP2::Client.new
    self.h2_client.on(:frame) do |bytes|
      self.ssl_socket.print bytes
      self.ssl_socket.flush
      nil
    end
  end

  def call(method, path, options = {})
    if self.h2_stream
      self.cleanup_stream
    end

    headers = (options[:headers] || {})
    headers.merge!({
      ":scheme" => "https",
      ":method" => method.to_s.upcase,
      ":path" => path,
    })

    headers.merge!("host" => self.hostname)

    if options[:body]
      headers.merge!("content-length" => options[:body].bytesize.to_s)
    else
      headers.delete("content-length")
    end

    self.h2_stream = self.h2_client.new_stream

    self.headers = {}
    self.h2_stream.on(:headers) do |hs_array|
      hs = Hash[*hs_array.flatten]
      self.headers.merge!(hs)
      nil
    end

    self.body = ""
    self.h2_stream.on(:data) do |d|
      self.body << d
      nil
    end

    self.done = false
    self.h2_stream.on(:close) do |d|
      self.done = true
      nil
    end

    if options[:body]
      self.h2_stream.headers(headers, :end_stream => false)
      self.h2_stream.data(options[:body], :end_stream => true)
    else
      self.h2_stream.headers(headers, :end_stream => true)
    end

    while !self.ssl_socket.closed? && !self.ssl_socket.eof?
      data = self.ssl_socket.readpartial(1024)

      begin
        self.h2_client << data
      rescue => e
        self.ssl_socket.close
        raise e
      end

      if self.done
        break
      end
    end

    if self.done
      self.cleanup_stream
      return OpenStruct.new(:status => self.headers[":status"],
        :headers => self.headers, :body => self.body)
    else
      return nil
    end
  end

  def close
    begin
      if self.h2_client
        self.h2_client.close
      end
    rescue
    end

    begin
      if self.ssl_socket
        self.ssl_socket.close
      end
    rescue
    end

    begin
      if self.tcp_socket
        self.tcp_socket.close
      end
    rescue
    end
  end

  def cleanup_stream
    if self.h2_stream
      self.h2_stream.delete_listeners
    end

    begin
      self.h2_stream.close
    rescue
    end
  end
end
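
When triaging a leak like this, it can help to count live objects per class between GC runs and see what actually accumulates. A sketch using only the standard library; the suggestion to probe `HTTP2::Stream` (or the listener arrays) is an assumption about where this particular leak lives:

```ruby
require "objspace"

# Count live instances of a class after forcing a GC pass. Comparing the
# count before and after a batch of requests shows which objects are
# retained across requests.
def live_count(klass)
  GC.start
  ObjectSpace.each_object(klass).count
end

# Usage sketch against the gem (class name assumed):
#   before = live_count(HTTP2::Stream)
#   100.times { client.call(:get, "/") }
#   after = live_count(HTTP2::Stream)   # should return to ~before
```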

HTTP2::Error::ProtocolError

Hi,

First, let me start off by saying thank you for working on this awesome gem! By having it be written in pure Ruby, this means C extension issues are out of the question 🙏 👏

Unfortunately, I have run into a bit of a speed bump with http-2. I am trying to add support for it (via https://github.com/mironrb/miron/pull/28) to Miron, a gem I am working on as a kind of Rack 2.0. I am trying to get the server running, but when I load the page from Chrome on my Mac with the server running, I encounter an HTTP2::Error::ProtocolError.

To reproduce:

  • Clone down branch from PR
  • bundle, etc.
  • cd examples
  • ../exe/miron server --mironfile=app.rb --handler=http2 --port=8080

Let me know if I can provide anything else to try and debug this, and find the source of the error.

Does http-2 work with ruby-1.9.3?

When I try to run require 'http/2' in ruby-1.9.3 I get the following error:

SyntaxError: /home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:97: syntax error, unexpected tPOW, expecting ')'
def initialize(**options)
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:303: dynamic constant assignment
HEADREP = {
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:313: dynamic constant assignment
NAIVE = { index: :never, huffman: :never }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:314: dynamic constant assignment
LINEAR = { index: :all, huffman: :never }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:315: dynamic constant assignment
STATIC = { index: :static, huffman: :never }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:316: dynamic constant assignment
SHORTER = { index: :all, huffman: :never }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:317: dynamic constant assignment
NAIVEH = { index: :never, huffman: :always }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:318: dynamic constant assignment
LINEARH = { index: :all, huffman: :always }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:319: dynamic constant assignment
STATICH = { index: :static, huffman: :always }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:320: dynamic constant assignment
SHORTERH = { index: :all, huffman: :shorter }.freeze
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:323: class definition in method body
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:325: syntax error, unexpected tPOW, expecting ')'
def initialize(**options)
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:465: class definition in method body
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:467: syntax error, unexpected tPOW, expecting ')'
def initialize(**options)
^
/home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2/compressor.rb:555: syntax error, unexpected keyword_end, expecting $end
from /home/lowks/.rvm/gems/ruby-1.9.3-p551/bundler/gems/http-2-025812598a71/lib/http/2.rb:8:in `require'

Naming of gem

This gem is awesome. Sorry for being super pedantic, but why not just call it http2?

It would make more sense.

require 'http2' would import a module called HTTP2.

No need to have 2.rb and http/2/*.rb.

I think it's fun that you can write require 'http/2', but it doesn't match up with the name of the module, nor is it typical.

Thoughts?

Header Compression Issue (COMPRESSION_ERROR)

I'm having a hard-to-reproduce issue which has to do with the state of the headers in the browser, but not in other clients. Namely, the browser stops being able to decompress the headers after a certain point in an HTTP/2 connection/session. Specifically, I'm receiving a goaway frame from the browser with the COMPRESSION_ERROR type and "Framer error: 6 (DECOMPRESS_FAILURE)". The browser is Chrome 56.0.2924.87 (64-bit), and this has been tested against the master branch of http-2.

The server log is the following (only logging the goaway as it happens):

 - - - [20/Mar/2017:15:17:27 +0100] "GET / 2.0" 200 - 1.7691
- - - [20/Mar/2017:15:17:27 +0100] "GET /leaflet/dist/leaflet.css?body=1 2.0" 200 10162 0.0298
- - - [20/Mar/2017:15:17:27 +0100] "GET /cart.css?body=1 2.0" 200 4186 0.0167
- - - [20/Mar/2017:15:17:27 +0100] "GET /auth_forms.css?body=1 2.0" 200 1874 0.0327
- - - [20/Mar/2017:15:17:27 +0100] "GET /application.css?body=1 2.0" 200 874 0.0791
- - - [20/Mar/2017:15:17:27 +0100] "GET /home.css?body=1 2.0" 200 365445 0.0978
- - - [20/Mar/2017:15:17:27 +0100] "GET /orders.css?body=1 2.0" 200 119 0.0924
- - - [20/Mar/2017:15:17:27 +0100] "GET /pizzas.css?body=1 2.0" 200 - 0.1002
- - - [20/Mar/2017:15:17:27 +0100] "GET /registrations.css?body=1 2.0" 200 - 0.0689
- - - [20/Mar/2017:15:17:27 +0100] "GET /jquery.js?body=1 2.0" 200 293431 0.0660
- - - [20/Mar/2017:15:17:27 +0100] "GET /jquery_ujs.js?body=1 2.0" 200 21600 0.0658
- - - [20/Mar/2017:15:17:27 +0100] "GET /bootstrap.js?body=1 2.0" 200 120 0.0673
- - - [20/Mar/2017:15:17:27 +0100] "GET /bootstrap/alert.js?body=1 2.0" 200 2261 0.0666
- - - [20/Mar/2017:15:17:27 +0100] "GET /bootstrap/tooltip.js?body=1 2.0" 200 16346 0.0670
- - - [20/Mar/2017:15:17:27 +0100] "GET /bootstrap/popover.js?body=1 2.0" 200 3164 0.0633
received goaway: {:length=>45, :type=>:goaway, :flags=>[], :stream=>0, :last_stream=>0, :error=>:compression_error, :payload=>"Framer error: 6 (DECOMPRESS_FAILURE)."}
- - - [20/Mar/2017:15:17:27 +0100] "GET /bootstrap/dropdown.js?body=1 2.0" 200 4726 0.0713
- - - [20/Mar/2017:15:17:27 +0100] "GET /bootstrap/collapse.js?body=1 2.0" 200 5964 0.0759
- - - [20/Mar/2017:15:17:27 +0100] "GET /simplecart-js/simpleCart.js?body=1 2.0" 200 57255 0.0725
- - - [20/Mar/2017:15:17:27 +0100] "GET /leaflet/dist/leaflet.js?body=1 2.0" 200 125412 0.0747
- - - [20/Mar/2017:15:17:27 +0100] "GET /home.js?body=1 2.0" 200 149 0.0733
- - - [20/Mar/2017:15:17:27 +0100] "GET /cart.js?body=1 2.0" 200 927 0.0646
- - - [20/Mar/2017:15:17:27 +0100] "GET /orders.js?body=1 2.0" 200 1 0.0643
- - - [20/Mar/2017:15:17:27 +0100] "GET /pizzas.js?body=1 2.0" 200 327 0.0508
- - - [20/Mar/2017:15:17:28 +0100] "GET /registrations.js?body=1 2.0" 200 149 0.0434
- - - [20/Mar/2017:15:17:28 +0100] "GET /application.js?body=1 2.0" 200 600 0.0548

Chrome presents the status of subsequent requests/streams after goaway as failed in the Network tab, which probably means that it ignores subsequent streams from that connection.

I tried to use nghttp to debug this, and can't seem to reproduce the issue, as nghttp successfully loads page + assets (I've been testing with the -a option), no decompress failure.

Debugging it further, I suspect that this has to do with the header table size, and how the state is shared between client and server. Analyzing the settings stream from both nghttp and chrome shows different parameters being shared:

# from Chrome
#=> server settings: {:stream=>0, :payload=>[[:settings_max_concurrent_streams, 100]]}
#=> chrome settings: {:flags=>[], :stream=>0, :payload=>[[:settings_max_concurrent_streams, 1000], [:settings_initial_window_size, 6291456], [:settings_header_table_size, 65536]]}

# from nghttp
#=> server settings: {:stream=>0, :payload=>[[:settings_max_concurrent_streams, 100]]}
#=> nghttp settings: {:flags=>[], :stream=>0, :payload=>[[:settings_max_concurrent_streams, 100], [:settings_initial_window_size, 65535]]}

The supplied application sends a fair amount of "dynamic" headers (x-runtime, x-request-id, etag) which are always different but are still stored in the compressor table (which categorizes such headers as :incremental). In fact, removing some of them before generating and sending the headers frame stops the error from occurring (the compressor table is fairly reduced in that case).

I've also tried to send an explicit settings_header_table_size of 4096 to Chrome, but this didn't fix the issue. Do you have any idea how I could debug this further?
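
One way to narrow this down is to dump every SETTINGS and GOAWAY frame as it arrives, so the table sizes each side advertised are visible right up to the failure. `on(:frame_received)` is part of the gem's API (it appears in the example server); the helper around it is assumed glue:

```ruby
# Log negotiation-relevant frames from the peer. Seeing the advertised
# settings_header_table_size next to the eventual COMPRESSION_ERROR goaway
# shows whether the dynamic-table sizes ever diverged.
def log_negotiation(conn)
  conn.on(:frame_received) do |frame|
    if %i[settings goaway].include?(frame[:type])
      puts "recv #{frame[:type]}: #{frame[:payload].inspect}"
    end
  end
end
```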

Dual Protocol Server

I'm exploring this library for a Ruby server which could serve both HTTP/1.x and HTTP/2. I saw the upgrade script, so my question is also whether this (dual-version serving) is a reasonable idea, or whether one should just go full HTTP/2 and upgrade all connections. My assumption is that the latter is not a good idea, as a lot of clients just can't be upgraded to HTTP/2 due to dependency constraints (no compatible OpenSSL, old kernel, old curl). But if my assumption is correct, what is the best way to identify the version?

I was thinking of something like "read some data, check the advertised version in the first line, and continue accordingly". Is there a more correct way of doing this, or am I going in the right direction?
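
For cleartext connections this works because the HTTP/2 connection preface is a fixed 24-octet string (RFC 7540 §3.5), so peeking at the first bytes distinguishes the protocols without consuming them. A sketch; the dispatch around the check is assumed glue, and over TLS the cleaner signal is ALPN negotiation, which answers the question before any application data arrives:

```ruby
require "socket"

# RFC 7540 §3.5: every HTTP/2 connection starts with this exact sequence.
H2_PREFACE = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n".freeze

def h2_preface?(bytes)
  bytes.start_with?(H2_PREFACE)
end

# Usage sketch: MSG_PEEK leaves the bytes in the socket buffer, so the
# chosen handler still reads the full stream from the start.
#   head = sock.recv(H2_PREFACE.bytesize, Socket::MSG_PEEK)
#   h2_preface?(head) ? handle_h2(sock) : handle_h1(sock)
```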

Sending RST_STREAM on closed Streams results in GOAWAY

I'm seeing a race condition on pushes of small amounts of data that are canceled by the client. If the push data is small enough and transferred quickly enough, the server may receive the client's RST_STREAM frame after the pushed stream has already been closed. In this case, the http-2 server cannot find the stream id in the @streams hash, so it treats it as an unexpected stream identifier and sends a GOAWAY frame, killing the connection. Since the closed push stream is actually valid, I believe this is not correct and the RST_STREAM can be silently ignored in this case.

I have a fix for this that will not throw the error if the stream is in the @streams_recently_closed hash.

Obsolete Rubocop syntax prevents code linting

There are some minor Rubocop style issues which prevent the code from being checked for quality. See below for more details on this issue.

[jonesagyemang:~/Projects/starred_projects/http-2] master+ 2h26m7s ± rubocop
.rubocop.yml: Style/CaseIndentation has the wrong namespace - should be Layout
.rubocop.yml: Style/IndentHash has the wrong namespace - should be Layout
.rubocop.yml: Style/SpaceAroundOperators has the wrong namespace - should be Layout
.rubocop.yml: Style/ExtraSpacing has the wrong namespace - should be Layout
.rubocop_todo.yml: Style/IndentArray has the wrong namespace - should be Layout
.rubocop_todo.yml: Style/MultilineArrayBraceLayout has the wrong namespace - should be Layout
.rubocop_todo.yml: Style/MultilineHashBraceLayout has the wrong namespace - should be Layout
Error: obsolete parameter IndentWhenRelativeTo (for Layout/CaseIndentation) found in .rubocop.yml
`IndentWhenRelativeTo` has been renamed to `EnforcedStyle`
obsolete parameter AlignWith (for Lint/EndAlignment) found in .rubocop.yml
`AlignWith` has been renamed to `EnforcedStyleAlignWith`

Feature: Provide helper to build http2-settings header (for h2c)

As of now, it is only possible to encode full frames with the current API:

# inside the connection
frame = { type: :settings, ... }
encode([frame])

This is fine for most uses. However, the h2c spec allows for encoding the client settings in the HTTP2-Settings header. This is not the full frame, but only its payload, which is produced a few layers down in the framer.

I'd propose either a public API for returning only the encoded payload of frames (maybe harder), or adding a method to the connection/client that returns the payload for the HTTP2-Settings header. This is my current workaround:

# HTTP2 Client extension
def http2_settings
  payload = @local_settings.select { |k, v| v != HTTP2::SPEC_DEFAULT_CONNECTION_SETTINGS[k] }
  frame = { type: :settings, stream: 0, payload: Array(payload) }
  encode(frame).map do |f|
    noheader_frame = f[9..-1] # strip the 9-byte frame header
    Base64.urlsafe_encode64(noheader_frame)
  end.join
end

How to set a request timeout?

Hi,
I couldn't find a way to set a timeout for a request (frame). Can you please point me in the right direction?

Thank you,
r.
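
Since the gem is transport-agnostic (your code feeds bytes into the parser), there is no built-in request timeout: the usual approach is to enforce one around your own read loop. A coarse sketch using the standard Timeout module; `pump` and the socket/conn names are assumptions modelled on the README's client example:

```ruby
require "timeout"

# Feed socket bytes into the parser until EOF, aborting if the whole
# exchange takes longer than `seconds`. Timeout::Error propagates to the
# caller, which can then tear down (conn.goaway; socket.close).
def pump(conn, socket, seconds)
  Timeout.timeout(seconds) do
    while (bytes = socket.readpartial(1024))
      conn << bytes
    end
  end
rescue EOFError
  # peer finished sending; fall through
end
```

Per-request (rather than per-connection) deadlines are finer-grained: record a deadline when the stream is opened and check it from the stream's callbacks.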

Changelog

Is there a changelog available for this project? Trying to understand the impact of changes.

remove deep_dup usage in tests, use helper methods instead

Coming from #116 (comment)

The deep_dup method (the same as in ActiveSupport) shows some edge cases where it's mutating the hashes used in the tests. Whether this is a ruby bug or not is not the matter, but this could be fixed by relying on helper methods instead, which could be injected in the describe blocks where they are used.

Proxy layer

While original Net::HTTP has Proxy subclass, it would be nice to have access to same thing for http/2 protocol. Any ideas on this?

max concurrent streams: when to do when reached

I'm hitting a limit in a test I'm running locally, specifically when I want to issue more requests than max concurrent streams allows. With the current defaults, if I send 102 requests on the same connection when the limit is 100, the server handles the first 100 and doesn't send any kind of response for the last 2 (in my specific case, the client hangs waiting for those 2 streams that never come).

At first I thought this meant "only X streams can be handled concurrently; when one closes, send the next one". However, it doesn't work like that, because if I open the 101st stream when one of the first 100 streams closes (in the :close callback), it also hangs.

Is this a bug or a feature? There are a few things I was expecting from either the server (https://nghttp2.org, in this case) or the client, namely:

  • the server could send a GOAWAY for the exceeding streams (REFUSED_STREAM maybe?)
  • the client could fail when handling a number of concurrent streams greater than MAX
  • or, the client could fail after handling MAX streams
  • (after any of the above) the client would renegotiate settings with the server, thereby "resetting" the connection.

None of the above is backed by the spec (they're all expectations of mine), but I'll have a look to see what is to be expected in this case.
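
A common client-side answer is to throttle locally: never have more open streams than the peer's advertised SETTINGS_MAX_CONCURRENT_STREAMS, queue the rest, and drain the queue from each stream's :close callback. A sketch of that bookkeeping; the class is assumed glue, not gem API, and the limit would come from the peer's advertised settings:

```ruby
# Keep at most `max` streams in flight; excess requests wait in a FIFO
# queue and are released one-for-one as streams close.
class StreamThrottle
  def initialize(max)
    @max = max
    @active = 0
    @pending = []
  end

  # Pass a block that opens the stream (e.g. calls conn.new_stream and
  # sends the request headers/body).
  def request(&open_stream)
    if @active < @max
      @active += 1
      open_stream.call
    else
      @pending << open_stream
    end
  end

  # Wire this into each stream's on(:close) handler.
  def stream_closed
    @active -= 1
    if (queued = @pending.shift)
      @active += 1
      queued.call
    end
  end
end
```

This matches the behaviour described above: the 101st request only goes out once a slot has actually been freed, instead of being dropped silently by the peer.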

New Release

Any chance that the latest changes can be released? Or there's a pending blocker issue?

Syntax error

This commit introduced a syntax error:

+ def initialize(connection:, id:, weight: 16

/Users/george/repos/http-2/lib/http/2/stream.rb:74: syntax error, unexpected ',' (SyntaxError)
...    def initialize(connection:, id:, weight: 16, dependency:

I can fix the syntax errors (with nil), but this commit also removes ArgumentErrors. I didn't know if they should be removed.

If we need the ArgumentErrors, then we can revert the whole commit.

Issues properly closing

This is probably just a silly mistake on my part, but I'm trying to write a simple HTTP/2 server wrapped in Celluloid (basically Reel for HTTP/2).

The issue I'm running into is that I always get protocol errors after sending the final response. I basically copied in the example server implementation. Everything works fine with the example server/client, and I'm not seeing where what I'm doing differs significantly from the example. I'm hoping it's not actually celluloid or celluloid-io related, as I am using those to support async IO (namely, multiple connections).

Here is the server I have. I test against the example client. Any thoughts on what might be the issue would be greatly appreciated!

require 'socket'
require 'openssl'
require 'http/2'
require 'celluloid/current'
require 'celluloid/io'
Celluloid.boot

module Ratchet
  class Server
    include Celluloid
    include Celluloid::IO
    include Celluloid::Internals::Logger
    DRAFT = 'h2'.freeze

    def initialize host, port, cert: nil, key: nil, **options
      if cert and key
        ctx = OpenSSL::SSL::SSLContext.new
        ctx.cert = OpenSSL::X509::Certificate.new cert
        ctx.key = OpenSSL::PKey::RSA.new key
      
        ctx.ssl_version = :TLSv1_2
        ctx.options = OpenSSL::SSL::SSLContext::DEFAULT_PARAMS[:options]
        ctx.ciphers = OpenSSL::SSL::SSLContext::DEFAULT_PARAMS[:ciphers]
      
        ctx.alpn_protocols = ['h2']
      
        ctx.alpn_select_cb = lambda do |protocols|
          raise "Protocol #{DRAFT} is required" if protocols.index(DRAFT).nil?
          DRAFT
        end
      
        ctx.tmp_ecdh_callback = lambda do |*_args|
          OpenSSL::PKey::EC.new 'prime256v1'
        end

        server = Celluloid::IO::TCPServer.new(host, port)
        @server = Celluloid::IO::SSLServer.new(server, ctx)
      else
        @server = TCPServer.new host, port
      end
    end
    def run
      loop {async.handle_connection @server.accept}
    end
    def handle_connection sock
      puts 'New TCP connection!'

      conn = HTTP2::Server.new
      conn.on(:frame) do |bytes|
        #puts "Writing bytes: #{bytes.unpack("H*").first}"
        #sock.is_a?(TCPSocket) ? sock.sendmsg(bytes) : sock.write(bytes)
        sock.write(bytes)
      end
      conn.on(:frame_sent) do |frame|
        puts "Sent frame: #{frame.inspect}"
      end
      conn.on(:frame_received) do |frame|
        puts "Received frame: #{frame.inspect}"
      end
      conn.on(:stream) do |stream|
        #log = Logger.new(stream.id)
        req, buffer = {}, ''
  
        stream.on(:active) { info 'client opened new stream' }
        stream.on(:close)  { info 'stream closed' }
  
        stream.on(:headers) do |h|
          req = Hash[*h.flatten]
          info "request headers: #{h}"
        end

        stream.on(:data) do |d|
          info "payload chunk: <<#{d}>>"
          buffer << d
        end
        stream.on(:half_close) do
          info 'client closed its end of the stream'
  
          response = nil
          if req[':method'] == 'POST'
            info "Received POST request, payload: #{buffer}"
            response = "Hello HTTP 2.0! POST payload: #{buffer}"
          else
            info 'Received GET request'
            response = 'Hello HTTP 2.0! GET request'
          end
  
          stream.headers({
            ':status' => '200',
            'content-length' => response.bytesize.to_s,
            'content-type' => 'text/plain',
          }, end_stream: false)
  
          # split response into multiple DATA frames
          stream.data(response.slice!(0, 5), end_stream: false)
          stream.data(response)
        end
      end
      while !sock.closed? && !(sock.eof? rescue true) # rubocop:disable Style/RescueModifier
        data = sock.readpartial(1024)
        # puts "Received bytes: #{data.unpack("H*").first}"
  
        begin
          conn << data
        rescue => e
          puts "#{e.class} exception: #{e.message} - closing socket."
          e.backtrace.each { |l| puts "\t" + l }
          sock.close
        end
      end
    end
  end
end

key = File.read(File.join(File.dirname(__FILE__), 'test.key'))
cert = File.read(File.join(File.dirname(__FILE__), 'test.crt'))
server = Ratchet::Server.new '0.0.0.0', 8080, key: key, cert: cert
server.run

Cannot send WINDOW_UPDATE for connection

Connection does not have a mechanism to send WINDOW_UPDATE. Crafting a WINDOW_UPDATE frame and passing it to the send method doesn't work, because send is private.

Suggestion:

  • add window_update method to Connection (and stream), or
  • make send public

(A wilder idea: create a pseudo-stream object that corresponds to connection-level flow control and make it accessible from the application. Caution: state management would have to be overridden by a no-op.)
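
Until a public API exists, Ruby's `__send__` can invoke a private method, so a connection-level WINDOW_UPDATE can be forced through. This is a fragile workaround tied to the gem's internals (the private method name and frame-hash layout are assumptions that may change between versions), so the mechanism is also shown on a toy class:

```ruby
# Workaround sketch against the gem (internals assumed):
#   frame = { type: :window_update, stream: 0, increment: 32_768 }
#   conn.__send__(:send, frame)   # __send__ bypasses the private marker

# The mechanism itself, demonstrated on a toy class:
class Guarded
  private def deliver(n)
    n * 2
  end
end

Guarded.new.__send__(:deliver, 21)  # => 42, despite `deliver` being private
```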

0.8.0 problem

With 0.7.0 installed, rake -T works fine,
but with 0.8.0 I get the following error:

$ rake -T
rake aborted!
SyntaxError: /Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/http-2-0.8.0/lib/http/2/stream.rb:74: syntax error, unexpected ','
... def initialize(connection:, id:, weight: 16, dependency:...
... ^
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/http-2-0.8.0/lib/http/2/stream.rb:74: Can't assign to false
...ependency: 0, exclusive: false, parent: nil)
... ^
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/http-2-0.8.0/lib/http/2/stream.rb:576: syntax error, unexpected keyword_end, expecting end-of-input
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:274:in `require'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:274:in `block in require'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:240:in `load_dependency'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:274:in `require'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/http-2-0.8.0/lib/http/2.rb:13:in `<top (required)>'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/bundler-1.8.0.pre/lib/bundler/runtime.rb:85:in `require'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/bundler-1.8.0.pre/lib/bundler/runtime.rb:85:in `rescue in block in require'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/bundler-1.8.0.pre/lib/bundler/runtime.rb:68:in `block in require'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/bundler-1.8.0.pre/lib/bundler/runtime.rb:61:in `each'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/bundler-1.8.0.pre/lib/bundler/runtime.rb:61:in `require'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/gems/bundler-1.8.0.pre/lib/bundler.rb:134:in `require'
/Users/beslow/workspace/ddc_system/config/application.rb:7:in `<top (required)>'
/Users/beslow/workspace/ddc_system/Rakefile:4:in `require'
/Users/beslow/workspace/ddc_system/Rakefile:4:in `<top (required)>'
/Users/beslow/.rvm/gems/ruby-2.0.0-p481@ddc/bin/ruby_executable_hooks:15:in `eval'
(See full trace by running task with --trace)

Documentation for the promise?

Thank you for all your awesome work on HTTP2.

The following code from the docs seems strange:

head = {
  ":status" => 200,
  ":path"   => "/other_resource",
  "content-type" => "text/plain"
}

# initiate server push stream
stream.promise(head) do |push|
  push.headers({ ... })
  push.data(...)
end

# send response
stream.headers({
  ":status" => 200,
  "content-type" => "text/plain"
})

# split response between multiple DATA frames
stream.data(response_chunk, end_stream: false)
promise.data(payload)
stream.data(last_chunk)

It's not clear what data would be pushed in the promise. According to HPBN

PUSH_PROMISE frames, which signal the server’s intent to push the described resources to the client and need to be delivered ahead of the response data that requests the pushed resources.

What data would be sent in the PUSH_PROMISE if the data is supposed to be sent later?

Also, promise is not defined at the point of promise.data(payload). Is this supposed to refer to the promise stream? If so, how do I access it? Or are the promise and the response stream the same? In that case, would it be stream.data(promise_data)?

Unexpected stream identifier

Hello @igrigorik,
Since upgrading from 0.8.2 to 0.8.3 nearly all of my tests in NetHttp2 fail (without changes to code). I'm basically implementing what is shown in examples so I'd welcome a few pointers to see what I may be doing wrong.

All of my tests fail because an HTTP2::Error::ProtocolError is raised here in my dummy_server.rb when it receives a window_update frame such as {:length=>4, :type=>:window_update, :flags=>[], :stream=>1, :increment=>13}.

It looks like the server doesn't know how to handle window_update frames anymore. Is this possible, or am I mistaken?

Connect from browsers?

Hello,

The Ruby server and client work well for me. Now I'm wondering whether the Ruby server can be reached from a browser. When I try, I always get the following log on the server side:
New TCP connection!
Exception: HTTP2::Error::HandshakeError, HTTP2::Error::HandshakeError - closing socket.

which means the browser may not support the protocol. I'm using the latest Chrome on OSX.

Any idea to load the page on browser (maybe not Chrome)?

Many thanks!

Connection.activate_stream RuntimeError: can't add a new key into hash during iteration

We're experiencing a very intermittent error using http-2 by way of net-http2, by way of Apnotic.

E, [2018-03-27T03:50:05.485152 #657] ERROR -- : Actor crashed!
RuntimeError: can't add a new key into hash during iteration
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/http-2-0.8.4/lib/http/2/connection.rb:669:in `activate_stream'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/http-2-0.8.4/lib/http/2/connection.rb:109:in `new_stream'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/net-http2-0.16.0/lib/net-http2/client.rb:86:in `new_stream'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/net-http2-0.16.0/lib/net-http2/client.rb:93:in `new_monitored_stream_for'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/net-http2-0.16.0/lib/net-http2/client.rb:40:in `call_async'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/apnotic-1.3.0/lib/apnotic/connection.rb:85:in `delayed_push_async'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/apnotic-1.3.0/lib/apnotic/connection.rb:49:in `push_async'
	/var/www/[removed]/releases/20170327013745/lib/[removed]/connection.rb:98:in `send_notification'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/calls.rb:28:in `public_send'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/calls.rb:28:in `dispatch'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/call/async.rb:7:in `dispatch'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/cell.rb:50:in `block in dispatch'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/cell.rb:76:in `block in task'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/actor.rb:337:in `block (2 levels) in task'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/task.rb:97:in `exclusive'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid.rb:421:in `exclusive'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/actor.rb:337:in `block in task'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/task.rb:44:in `block in initialize'
	/var/www/[removed]/shared/bundle/ruby/2.5.0/gems/celluloid-0.17.3/lib/celluloid/task/fibered.rb:14:in `block in create'

The issue at connection.rb:669 is not readily apparent to me. Perhaps this is a concurrency problem, with some other part of the library iterating over the @streams hash at the same time?
