Comments (13)

adamfisk commented on June 5, 2024

The hprof file for this is now uploaded at https://littleshoot.s3.amazonaws.com/littleproxy-memory-leak.hprof

leahxschmidt commented on June 5, 2024

There seems to be no limit on how large the proxy's buffer can grow. If a client is downloading at 1 B/s while the server can serve at a much higher rate, the proxy will simply buffer the entire response. If the client terminates before downloading the whole buffer, the proxy does seem to release the memory.

Is it possible that some process starts downloading a large stream and then forgets to read from it?
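
To make that scenario concrete, here is a minimal Netty 3.x-style sketch of a relay handler that behaves the way described above (the class and field names are illustrative, not LittleProxy's actual code):

```java
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Relays everything read from the server connection straight to the client
// connection. Channel.write() is asynchronous and only queues the buffer, so
// if the client drains at 1 B/s while the server sends quickly, the queued
// buffers pile up on the heap, which is exactly the unbounded growth described above.
public class NaiveRelayHandler extends SimpleChannelUpstreamHandler {

    private final Channel clientChannel;

    public NaiveRelayHandler(Channel clientChannel) {
        this.clientChannel = clientChannel;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Nothing here checks clientChannel.isWritable() or throttles reads on
        // the server channel, so there is no limit on how far ahead the server
        // side can get.
        clientChannel.write(e.getMessage());
    }
}
```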

adamfisk commented on June 5, 2024

Interesting. LittleProxy/Netty auto-chunks responses, streaming incoming chunks from the server to the proxied client connection, so the entire response isn't held in memory unless there's a bug. Smaller responses are held in memory, but larger ones are not, regardless of whether the target content uses Transfer-Encoding: chunked.

In any case, it seems like either the associated channels aren't getting cleaned up or there's a bug related to chunking.

One test would be to run LittleProxy locally with a really low -Xmx, say -Xmx100m, run a local server with something like python -m SimpleHTTPServer, give it a static file of 200 MB or so, and try to download it with curl using LittleProxy as the proxy. That should theoretically give the same error.
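
For illustration, the streaming path described above looks roughly like this in a Netty 3.x pipeline (a sketch under those assumptions, not LittleProxy's actual pipeline): with a bounded maxChunkSize the decoder hands the proxy the headers as an HttpResponse and the body as a stream of HttpChunk objects, so a large body never has to be assembled in memory before being forwarded.

```java
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.HttpChunk;
import org.jboss.netty.handler.codec.http.HttpResponse;
import org.jboss.netty.handler.codec.http.HttpResponseDecoder;

public final class ServerConnectionPipeline {

    // Pipeline for the proxy's connection to the target server.
    static ChannelPipeline create(final Channel clientChannel) {
        ChannelPipeline pipeline = Channels.pipeline();
        // 8 KB limits for the initial line, the headers, and each chunk: large
        // bodies are decoded as a sequence of HttpChunk messages.
        pipeline.addLast("decoder", new HttpResponseDecoder(8192, 8192, 8192));
        pipeline.addLast("relay", new SimpleChannelUpstreamHandler() {
            @Override
            public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                Object msg = e.getMessage();
                // First an HttpResponse with the headers (and the whole body,
                // for small responses), then HttpChunk messages until isLast().
                if (msg instanceof HttpResponse || msg instanceof HttpChunk) {
                    clientChannel.write(msg);
                }
            }
        });
        return pipeline;
    }
}
```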

adamfisk commented on June 5, 2024

Just had a chance to play with this, and unfortunately it doesn't hit an OOME for me in that scenario. The hprof file itself is handy for testing. I didn't monitor memory while doing it, but with -Xmx100m I grabbed that file through the proxy with no problem, the file being 1164 MB.

adamfisk commented on June 5, 2024

It's also pretty easy to test at least the vanilla "forgets to read from it" scenario with the above setup, and it certainly looks like LittleProxy is correctly closing the connection to the target server in the case where the client kills the connection.

adamfisk commented on June 5, 2024

Uploaded the log file as well in case it's useful:

https://littleshoot.s3.amazonaws.com/HeapDumpOnOutOfMemoryError-out.txt

adamfisk commented on June 5, 2024

Ahh yes, it is along the lines you mentioned:

curl --limit-rate 10k -x 127.0.0.1:8080 -O http://127.0.0.1/java_pid20562.hprof

Gives OOME in no time!!

md-5 commented on June 5, 2024

That was a good spot, leahx!

adamfisk commented on June 5, 2024

FYI I just posted this to StackOverflow at:

http://stackoverflow.com/questions/14371207/is-there-any-way-to-read-from-one-netty-channel-only-as-fast-as-you-can-write-to
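
For anyone following along, the usual Netty 3.x answer to that question looks roughly like the sketch below (handler names are illustrative, and this is not necessarily what the eventual LittleProxy fix does): suspend reads on the server channel whenever the client channel's outbound buffer is full, and resume them when it drains.

```java
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// On the server connection: forward data to the client, but stop reading from
// the server as soon as the client's write buffer is full.
public class ThrottledRelayHandler extends SimpleChannelUpstreamHandler {

    private final Channel clientChannel;

    public ThrottledRelayHandler(Channel clientChannel) {
        this.clientChannel = clientChannel;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        clientChannel.write(e.getMessage());
        if (!clientChannel.isWritable()) {
            // The client can't keep up; suspend reads so TCP flow control
            // pushes back on the origin server instead of the proxy buffering
            // the difference in memory.
            e.getChannel().setReadable(false);
        }
    }
}

// On the client connection: when its write buffer drains below the low-water
// mark, Netty fires channelInterestChanged and we resume reading from the server.
class ResumeServerReadsHandler extends SimpleChannelUpstreamHandler {

    private final Channel serverChannel;

    ResumeServerReadsHandler(Channel serverChannel) {
        this.serverChannel = serverChannel;
    }

    @Override
    public void channelInterestChanged(ChannelHandlerContext ctx, ChannelStateEvent e)
            throws Exception {
        if (e.getChannel().isWritable()) {
            serverChannel.setReadable(true);
        }
        super.channelInterestChanged(ctx, e);
    }
}
```

The same trick in the other direction (a client uploading faster than the server can read) would throttle reads on the client channel based on the server channel's writability.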

adamfisk commented on June 5, 2024

The above commit covers the case where the target server is faster than the client but not the other way around. The fixed case is by far the more common in practice, but the other needs fixing too.

nasis commented on June 5, 2024

Adam, the commit misses some files and the build breaks. Also, do all new commits go to 0.5-SNAPSHOT? That makes it difficult if I want to stay with a stable version.

adamfisk commented on June 5, 2024

Whoops - sorry. I forgot to commit a couple of test classes. I've been trying to keep HEAD quite stable, but new commits are going to 0.5-SNAPSHOT. This was a pretty big bug, at least one we were hitting fairly quickly in production, so I wanted to push the fix as soon as possible. I would be surprised if there are problems with it, but of course it's hard to be 100% sure. My thought was to make the fix for uploads as well and then release 0.5 as final before moving on to 0.6-SNAPSHOT for real. Apologies if it has caused problems.

subnetmarco commented on June 5, 2024

+1
