Comments (12)
Another point to take into consideration: don't gzip content that is already gzipped. Something like this:
$ git diff
diff --git gzip.go gzip.go
index c47039f..36ccd1b 100644
--- gzip.go
+++ gzip.go
@@ -102,10 +102,11 @@ func (w *GzipResponseWriter) Write(b []byte) (int, error) {
 	// On the first write, w.buf changes from nil to a valid slice
 	w.buf = append(w.buf, b...)
+	_, alreadyEncoded := w.Header()[contentEncoding]
 	// If the global writes are bigger than the minSize and we're about to write
 	// a response containing a content type we want to handle, enable
 	// compression.
-	if len(w.buf) >= w.minSize && handleContentType(w.contentTypes, w) {
+	if !alreadyEncoded && len(w.buf) >= w.minSize && handleContentType(w.contentTypes, w) {
 		err := w.startGzip()
 		if err != nil {
 			return 0, err
I'm currently using gziphandler in a server that serves content either "locally" from my server or reverse-proxied from other servers. I use a stack of http handlers that includes gziphandler, and for all requests that are to be proxied I do r.Header.Del("Accept-Encoding"), which means the content will be gzipped "locally" by gziphandler. To avoid this (and to reduce CPU usage on my server) I would have to create two stacks of http handlers: one with gziphandler and another without, so that all the gzipping is done at the proxied servers and my server just passes the response through without modification. If gziphandler had the above code, I wouldn't have to create two stacks of http handlers, which would simplify my code.
from gziphandler.
That PR resolves the issue of not recompressing responses with the Content-Encoding header, but this issue is about not compressing responses based on the Content-Type header.
from gziphandler.
This was fixed with the merging of #51.
from gziphandler.
If this is fixed, it might be time to close this issue, no?
I'm just discovering this project, and most open issues seem in fact to be done already. Finding that out only after reading a long discussion is unexpected.
from gziphandler.
This sounds like a nice idea, but the combinatorial explosion of config options might be getting out of hand.
from gziphandler.
I like config structs, but I don't think this functionality belongs in this middleware. I'm not strongly against it, and the implementation would be trivial, but I'd like to hear some concrete use-cases first. I rarely encounter endpoints which may or may not want their responses compressed.
from gziphandler.
For API servers, sending content uncompressed is not needed; for this use case the current behaviour is perfect. This is the primary way I use this middleware too.
But if I am writing a server that serves a full website (images, videos, binaries, etc.), then gzipping the payload is often redundant. Practically no full-featured webserver compresses jpg, png, mp4, etc., since these formats don't gain enough from it. This is especially true for videos: running on-the-fly gzip on a 1 GB video file to save a few KB is quite wasteful. In this case, I had to write my own filter based on file extension to enable/disable compression, but I would rather do it based on the response Content-Type.
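That Content-Type-based filter could look something like the sketch below. The helper name compressibleType and the exact whitelist are assumptions for illustration, not gziphandler's behaviour.

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// compressibleType reports whether a response with this Content-Type is
// worth gzipping. Text-like types gain a lot; jpg/png/mp4 and friends
// are already compressed, so re-compressing them mostly wastes CPU.
// The whitelist here is an example, not an exhaustive policy.
func compressibleType(ct string) bool {
	mt, _, err := mime.ParseMediaType(ct)
	if err != nil {
		return false // malformed header: play it safe and skip compression
	}
	if strings.HasPrefix(mt, "text/") {
		return true
	}
	switch mt {
	case "application/json", "application/javascript", "application/xml", "image/svg+xml":
		return true
	}
	return false
}

func main() {
	fmt.Println(compressibleType("text/html; charset=utf-8")) // true
	fmt.Println(compressibleType("video/mp4"))                // false
	fmt.Println(compressibleType("image/png"))                // false
}
```

Parsing with mime.ParseMediaType strips parameters like charset, so the decision is made on the bare media type rather than on a raw string match.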
from gziphandler.
On config / options: functional options
Personally I really like this approach.
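For reference, a minimal sketch of the functional-options pattern applied to this middleware. The option names (MinSize, ContentTypes) and the default value are illustrative assumptions, not gziphandler's eventual API.

```go
package main

import "fmt"

// config holds the tunables; unexported so it can only be built via options.
type config struct {
	minSize      int
	contentTypes []string
}

// Option mutates a config; each exported constructor below returns one.
type Option func(*config)

// MinSize sets the minimum body size before compression kicks in.
func MinSize(n int) Option {
	return func(c *config) { c.minSize = n }
}

// ContentTypes restricts compression to the given media types.
func ContentTypes(ts ...string) Option {
	return func(c *config) { c.contentTypes = ts }
}

// newConfig applies options over defaults (the 1400 default is an
// assumption for this sketch).
func newConfig(opts ...Option) config {
	c := config{minSize: 1400}
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	c := newConfig(MinSize(860), ContentTypes("text/html", "application/json"))
	fmt.Println(c.minSize, c.contentTypes) // 860 [text/html application/json]
}
```

The appeal here is that new knobs can be added later as new Option constructors without breaking existing callers, avoiding the combinatorial explosion of constructor signatures mentioned above.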
from gziphandler.
I'd like this — we proxy assets through go right now (which we arguably shouldn't), and it'd be nice to compress the javascript files but not the pngs.
from gziphandler.
You all win 😁 This has been resolved with #55
from gziphandler.
Related Issues (20)
- Static files from ServeFile
- Incorrect check for short write
- $maccon
- State of the package?
- If not handleContenttype it buffers entire content
- make acceptsGzip public
- Whitelist by mime type
- Handling HEAD requests
- Do not set Content-Length on OPTIONS requests
- The NYTimes GitHub org will be renamed to nytimes
- go mod depends on 1.12
- Range-Requests aren't properly handled
- Should Content-Length be set on gzipped responses?
- Align Go package casing with VCS URL casing
- Accept func(w http.ResponseWriter, r *http.Request) literals
- How to force flushing from the wrapped handler?
- Show the error of gw.Close()
- Vary: Accept-encoding header is duplicated if inner handler sets it
- Swappable gzip implementation?
- "identity" Content-Encoding should also be compressed