Comments (10)
akka/akka#15799 (comment)
This is a pretty good discussion of how clients should (and actually do) handle the Expect: 100-continue header.
from falcon.
I'm interested in @ioquatix's thoughts, so that any Puma implementation doesn't get too far out of whack with what other servers might want to do.
I have an example hack here: https://github.com/socketry/falcon/pull/206/files#diff-32e79c935c64dde33bd327b3b8c530bc4b842262ec53092b82830f7bb0177184. It's not clean, but it works. The example has other issues, though: under load it fails (several uploads fail), and I didn't have time for thorough debugging.
I think most people think of HTTP as a request/response protocol.
Intermediate non-final responses break 99% of the interfaces people actually code against, e.g.
response = client.request("GET", "/")
# and
def server(request)
  return [200, ...]
end
The only solution I have to this is to consider some kind of response chaining, e.g.:
response = client.request("GET", "/")
response.status # 100
while !response.final?
  response = response.next
end
response.status # 200 or something else
(I'm not even sure how you'd do this with a POST body - wait for 100 Continue on the client and then post the body as a second request??).
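The chaining idea above can be sketched as a small in-memory model. To be clear, none of these names (`Response`, `final?`, `next`, `resolve`) are an existing client API; they are hypothetical stand-ins for the interface being proposed:

```ruby
# Hypothetical sketch of the "response chaining" interface described above.
# A response knows whether it is final (2xx-5xx) or provisional (1xx),
# and a provisional response can yield the next response on the wire.
Response = Struct.new(:status, :next_response) do
  # 1xx responses are informational (non-final) per RFC 9110.
  def final?
    status >= 200
  end

  def next
    next_response
  end
end

# Resolve a possibly-chained response down to the final one.
def resolve(response)
  response = response.next until response.final?
  response
end

# Simulate a server that sends "100 Continue" followed by "200 OK".
provisional = Response.new(100, Response.new(200, nil))
resolve(provisional).status # 200
```

Even in this toy form, the complexity shows: every caller either loops like `resolve` does, or the client library hides the loop and we're back to pretending HTTP is strictly request/response.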
I don't think the benefits of non-final responses are commensurate with the interface complexity they introduce. That's my current opinion, but I'd be willing to change it if there were some awesome new use case I was not familiar with.
In addition, HTTP/2 already has stream cancellation for this kind of problem, and there is nothing wrong with cancelling a stream mid-flight, even for HTTP/1 - it's not as efficient as HTTP/2+ but it has a consistent semantic.
So, in summary: I think provisional responses introduce significant complexity to the interface on both the client AND the server, and I don't think the value they add in a few small corner cases is worth that complexity. Remember that every hop along the way, including proxies, has to handle them correctly.
The question in my mind is: do we want request/response, or request/response/response/response/response?
My motivation for opening the issue on Puma was puma/puma#3188 (comment): redirecting where an upload should go. For example, my client POSTs to my app, and the app responds with a redirect to some cloud storage. The app has generated the (temporary) URL for that, so my clients don't need to know how to authenticate with the cloud storage, just with my app.
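On the app side, that use case is just a handler that redirects the upload without ever reading the request body. A minimal Rack sketch, where the storage URL is a made-up placeholder:

```ruby
# Minimal Rack app sketch: redirect an upload to pre-signed cloud storage
# without reading the request body. The URL below is a placeholder, not
# a real endpoint.
UPLOAD_TARGET = "https://storage.example.com/bucket/upload?signature=..."

app = lambda do |env|
  if env["REQUEST_METHOD"] == "POST" && env["PATH_INFO"] == "/upload"
    # 307 preserves the request method and body across the redirect
    # (unlike 302). Crucially, we never read env["rack.input"] here;
    # whether the client can avoid sending the body at all depends on
    # Expect: 100-continue support in both the client and the server.
    [307, { "location" => UPLOAD_TARGET }, []]
  else
    [404, { "content-type" => "text/plain" }, ["not found"]]
  end
end

status, headers, _body = app.call(
  "REQUEST_METHOD" => "POST",
  "PATH_INFO" => "/upload"
)
status # 307
```

The app works today regardless of 100-continue; the Expect mechanism only determines whether the client wastes bandwidth sending the body before seeing the redirect.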
Many years back, the similar(?) feature 103 Early Hints was implemented in Puma and also in Falcon. A few years later rack/rack#1692 was opened, and also rack/rack#1831, which started to discuss 100-continue too. It looks like the goal was to include this in Rack 3, but obviously that didn't happen. :)
Not sure where we go from here, I still have much to read up on in regards to Rack and HTTP/2 and so on.
@ioquatix here's my idea for Puma: puma/puma#3188 (comment)
@dentarg That's an interesting use case.
Are there any clients that actually support the redirect mechanism as outlined?
@ioquatix Yes, it looks like curl does; see https://stackoverflow.com/questions/68172230/nginx-behavior-of-expect-100-continue-with-http-redirect
Any others?
That's a great link. One part that stands out to me:
It is amazing how many people got it wrong.
There be dragons?
I think my interpretation most closely aligns with akka/akka#15799 (comment)
However, I appreciate how this might be possible to implement just as an internal detail. If that's true, what's the advantage?
This is not 100% correct. 100-Continue is only an optimization technique, not a protocol constraint.
This gives me some hope that maybe we don't need to expose it to the user... but it's followed by this:
This compounds the implementation complexity on the client-side: In the presence of an Expect: 100-continue request header the client must be prepared to either see 1 or 2 responses for the request, depending on the status code of the first response.
The level of complexity seems pretty high to me... and it's followed by this:
Many clients got the specs wrong, and that's why many servers actually always force Connection:close on any error response when expect-continue was in use. If you still have doubts, consult this answer from Roy Fielding: https://lists.w3.org/Archives/Public/ietf-http-wg/2015JulSep/0324.html
Which makes me think the entire thing is not worth pursuing, unless you enjoy suffering through the implementation and all the compatibility issues. Even in the best case of rejecting the incoming body, according to Dr Fielding we have to close the connection. Isn't that going to be worse for performance? In other words, isn't it easier to just close the connection if you want to reject the body? Not only that, but latency is introduced by the client waiting for the 100 Continue status. I don't know if the original use case of redirecting the POST body is even valid, because apparently the client can just ignore the missing 100 Continue and start sending the body anyway?
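The server-side rule Fielding describes boils down to a small decision: on Expect: 100-continue, either emit the interim 100 or send the final response with Connection: close, because the client may start streaming the body anyway. A sketch of that decision, using simplified stand-ins (the function names and header hash here are not any real server's internals):

```ruby
# Sketch of the server-side decision for Expect: 100-continue, following
# the guidance quoted above. Request/response shapes are simplified
# stand-ins, not a real server's internals.

# Interim response telling the client to go ahead and send the body.
def continue_response
  "HTTP/1.1 100 Continue\r\n\r\n"
end

# Final rejection. Since the client may start streaming the body anyway,
# the only safe way to discard it is to close the connection afterwards.
def reject_response(status_line)
  "HTTP/1.1 #{status_line}\r\nConnection: close\r\nContent-Length: 0\r\n\r\n"
end

# Returns the wire bytes to send before reading the body, or nil if the
# request carried no 100-continue expectation.
def respond_to_expectation(headers, accept_body:)
  return nil unless headers["expect"]&.casecmp?("100-continue")

  accept_body ? continue_response : reject_response("413 Content Too Large")
end

respond_to_expectation({ "expect" => "100-continue" }, accept_body: true)
# => "HTTP/1.1 100 Continue\r\n\r\n"
```

The forced connection close in the rejection path is exactly the performance cost being questioned above: rejecting a body cleanly under HTTP/1 means giving up the connection.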
Finally, for me, probably the biggest bias I have, is this problem is already solved in HTTP/2+ since closing a stream is so easy... Maybe for HTTP/1 it kind of sucks, but for HTTP/2+ I feel like this is a non-issue.
I'm still intrigued and interested in where this discussion goes, but I'm not sure I have patience to actually do the implementation...