# tunnelto

`tunnelto` lets you expose your locally running web server via a public URL.
Written in Rust. Built completely with async-io on top of tokio.
## Install

```shell
brew install agrinman/tap/tunnelto
```

or

```shell
cargo install tunnelto
```

Or download a release for your target OS here: tunnelto/releases
## Usage

```shell
tunnelto --port 8000
```

The above command opens a tunnel and forwards traffic to `localhost:8000`.
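Before opening the tunnel, it's worth confirming that something is actually listening on the forwarded port. A quick sanity check (python3's built-in server here is just a hypothetical stand-in for your real app):

```shell
# Stand-in app: serve the current directory on port 8000, the port
# the tunnel forwards to. Any local web server works equally well.
python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# This is what the tunnel does with each incoming public request:
# forward it to localhost:8000. If this returns a status code, the
# public URL will serve the same content.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8000/)
echo "$STATUS"

kill "$SERVER_PID"
```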
```shell
⇢ tunnelto --help
tunnelto 0.1.6
Expose your local web server to the internet with a public url.

USAGE:
    tunnelto [FLAGS] [OPTIONS] [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information
    -v, --verbose    A level of verbosity, and can be used multiple times

OPTIONS:
    -k, --key <key>                 Sets an API authentication key to use for this tunnel
    -p, --port <port>               Sets the port to forward incoming tunnel traffic to on localhost [default: 8000]
    -s, --subdomain <sub-domain>    Specify a sub-domain for this tunnel

SUBCOMMANDS:
    help        Prints this message or the help of the given subcommand(s)
    set-auth    Store the API Authentication key
```
## Self-hosting

- Compile the server for the musl target. See `musl_build.sh` for a way to do this trivially with Docker!
- See `Dockerfile` for a simple alpine-based image that runs that server binary.
- Deploy the image wherever you want.
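As a rough illustration of the second step, an alpine image wrapping a musl-built server binary could look something like this (a hypothetical sketch; the binary path and exposed ports are assumptions based on this README — the repo's own `Dockerfile` is the authoritative version):

```dockerfile
# Hypothetical sketch; see the repo's Dockerfile for the real image.
FROM alpine:latest

# Copy in the statically linked (musl) server binary built beforehand.
COPY target/x86_64-unknown-linux-musl/release/tunnelto_server /usr/local/bin/tunnelto_server

# 8080: public TCP traffic, 5000: control websockets (the defaults used below)
EXPOSE 8080 5000

ENTRYPOINT ["/usr/local/bin/tunnelto_server"]
```

A musl build produces a fully static binary, which is why a bare alpine base image is enough.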
```shell
# Run the server: expects TCP traffic on 8080 and control websockets on 5000
ALLOWED_HOSTS="localhost" ALLOW_UNKNOWN_CLIENTS=1 cargo run --bin tunnelto_server

# Run a local tunnelto client talking to your local tunnelto_server
WORMHOLE_HOST="localhost" WORMHOLE_PORT=5000 TLS_OFF=1 cargo run --bin tunnelto -- start -p 8000

# Test it out!
# Remember 8080 is our local tunnelto TCP server
curl -H 'Host: <subdomain>.localhost' "http://localhost:8080/some_path?with=somequery"
```
The server is configured with the following environment variables:

- `ALLOWED_HOSTS`: which hostname suffixes we allow forwarding on
- `SECRET_KEY`: an authentication key for restricting access to your tunnelto server
- `ALLOW_UNKNOWN_CLIENTS`: a boolean flag; if set, enables unknown (unauthenticated) clients to use your tunnelto server. Note that unknown clients are not allowed to choose a subdomain via `-s`.
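For example, a locked-down deployment might be configured like this (the hostname and key below are hypothetical placeholders, not values from this project):

```shell
# Only hosts ending in this suffix will be forwarded.
export ALLOWED_HOSTS="tunnel.example.com"

# Clients must present this key; use a long random value in practice.
export SECRET_KEY="some-long-random-string"

# ALLOW_UNKNOWN_CLIENTS is deliberately left unset here, so
# unauthenticated clients are rejected.
```

With these set, the server process (however you launch it) inherits the configuration from its environment.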
This implementation does not support multiple running servers (i.e., there is no centralized coordination). Therefore, if you deploy multiple instances of the server, tunneling only works when the client's control connection and the remote TCP stream land on the same instance.