
Cloud SQL Auth Proxy


The Cloud SQL Auth Proxy is a utility for ensuring secure connections to your Cloud SQL instances. It provides IAM authorization, allowing you to control who can connect to your instance through IAM permissions, and TLS 1.3 encryption, without having to manage certificates.

See the Connecting Overview page for more information on connecting to a Cloud SQL instance, or the About the Proxy page for details on how the Cloud SQL Proxy works.

The Cloud SQL Auth Proxy supports connecting to MySQL, PostgreSQL, and SQL Server instances.

If you're using Go, Java, Python, or Node.js, consider using the corresponding Cloud SQL connector, which does everything the Proxy does, but in process.

For users migrating from v1, see the Migration Guide. The v1 README is still available.

Important

The Proxy does not configure the network between the VM it's running on and the Cloud SQL instance. You MUST ensure the Proxy can reach your Cloud SQL instance, either by deploying it in a VPC that has access to your Private IP instance, or by configuring Public IP.

Installation

Check for the latest version on the releases page and use the following instructions for your OS and CPU architecture.

Linux amd64
# see Releases for other versions
URL="https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3"

curl "$URL/cloud-sql-proxy.linux.amd64" -o cloud-sql-proxy

chmod +x cloud-sql-proxy
Linux 386
# see Releases for other versions
URL="https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3"

curl "$URL/cloud-sql-proxy.linux.386" -o cloud-sql-proxy

chmod +x cloud-sql-proxy
Linux arm64
# see Releases for other versions
URL="https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3"

curl "$URL/cloud-sql-proxy.linux.arm64" -o cloud-sql-proxy

chmod +x cloud-sql-proxy
Linux arm
# see Releases for other versions
URL="https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3"

curl "$URL/cloud-sql-proxy.linux.arm" -o cloud-sql-proxy

chmod +x cloud-sql-proxy
Mac (Intel)
# see Releases for other versions
URL="https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3"

curl "$URL/cloud-sql-proxy.darwin.amd64" -o cloud-sql-proxy

chmod +x cloud-sql-proxy
Mac (Apple Silicon)
# see Releases for other versions
URL="https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3"

curl "$URL/cloud-sql-proxy.darwin.arm64" -o cloud-sql-proxy

chmod +x cloud-sql-proxy
Windows x64
# see Releases for other versions
curl https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3/cloud-sql-proxy.x64.exe -o cloud-sql-proxy.exe
Windows x86
# see Releases for other versions
curl https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.11.3/cloud-sql-proxy.x86.exe -o cloud-sql-proxy.exe

Install from Source

To install from source, ensure you have the latest version of Go installed.

Then, simply run:

go install github.com/GoogleCloudPlatform/cloud-sql-proxy/v2@latest

The cloud-sql-proxy will be placed in $GOPATH/bin or $HOME/go/bin.
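Assuming that directory is on your PATH, you can verify the install with a version check, for example:

# Prints the installed Proxy version
cloud-sql-proxy --version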

Usage

The following examples all reference an INSTANCE_CONNECTION_NAME, which takes the form: myproject:myregion:myinstance.

To find your Cloud SQL instance's INSTANCE_CONNECTION_NAME, visit the detail page of your Cloud SQL instance in the console, or use gcloud with:

gcloud sql instances describe <INSTANCE_NAME> --format='value(connectionName)'

Credentials

The Cloud SQL Proxy uses a Cloud IAM principal to authorize connections against a Cloud SQL instance. The Proxy sources the credentials using Application Default Credentials.
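For local development, one common way to set up Application Default Credentials is with gcloud, for example:

# Creates local Application Default Credentials for the Proxy to use
gcloud auth application-default login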

Note

Any IAM principal connecting to a Cloud SQL database will need one of the following IAM roles:

  • Cloud SQL Client (preferred)
  • Cloud SQL Editor
  • Cloud SQL Admin

Or one may manually assign the following IAM permissions:

  • cloudsql.instances.connect
  • cloudsql.instances.get

See Roles and Permissions in Cloud SQL for details.

When the Proxy authenticates under the Compute Engine VM's default service account, the VM must have at least the sqlservice.admin API scope (i.e., "https://www.googleapis.com/auth/sqlservice.admin") and the associated project must have the SQL Admin API enabled. The default service account must also have at least writer or editor privileges to any projects of target SQL instances.

The Proxy also supports three flags related to credentials:

  • --token to use an OAuth2 token
  • --credentials-file to use a service account key file
  • --gcloud-auth to use the gcloud user's credentials (local development only)
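For example, a minimal invocation using a service account key file (the path below is illustrative):

# Uses a service account key instead of Application Default Credentials
./cloud-sql-proxy --credentials-file /path/to/key.json <INSTANCE_CONNECTION_NAME>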

Basic Usage

To start the Proxy, use:

# starts the Proxy listening on localhost with the default database engine port
# For example:
#   MySQL      localhost:3306
#   Postgres   localhost:5432
#   SQL Server localhost:1433
./cloud-sql-proxy <INSTANCE_CONNECTION_NAME>

The Proxy will automatically detect the default database engine's port and start a corresponding listener. Production deployments should use the --port flag to reduce startup time.

The Proxy supports multiple instances:

./cloud-sql-proxy <INSTANCE_CONNECTION_NAME_1> <INSTANCE_CONNECTION_NAME_2>

Configuring Port

To override the port, use the --port flag:

# Starts a listener on localhost:6000
./cloud-sql-proxy --port 6000 <INSTANCE_CONNECTION_NAME>

When specifying multiple instances, the port will increment from the flag value:

# Starts a listener on localhost:6000 for INSTANCE_CONNECTION_NAME_1
# and localhost:6001 for INSTANCE_CONNECTION_NAME_2.
./cloud-sql-proxy --port 6000 <INSTANCE_CONNECTION_NAME_1> <INSTANCE_CONNECTION_NAME_2>

To configure ports on a per instance basis, use the port query param:

# Starts a listener on localhost:5000 for the instance called "postgres"
# and starts a listener on localhost:6000 for the instance called "mysql"
./cloud-sql-proxy \
    'myproject:my-region:postgres?port=5000' \
    'myproject:my-region:mysql?port=6000'

Configuring Listening Address

To override the choice of localhost, use the --address flag:

# Starts a listener on all interfaces at port 5432
./cloud-sql-proxy --address 0.0.0.0 <INSTANCE_CONNECTION_NAME>

To override address on a per-instance basis, use the address query param:

# Starts a listener on 0.0.0.0 for "postgres" at port 5432
# and a listener on 10.0.0.1:3306 for "mysql"
./cloud-sql-proxy \
    'myproject:my-region:postgres?address=0.0.0.0' \
    'myproject:my-region:mysql?address=10.0.0.1'

Configuring Private IP

By default, the Proxy attempts to connect to an instance's public IP. To enable private IP, use:

# Starts a listener connected to the private IP of the Cloud SQL instance.
# Note: there must be a network path present for this to work.
./cloud-sql-proxy --private-ip <INSTANCE_CONNECTION_NAME>

Important

The Proxy does not configure the network. You MUST ensure the Proxy can reach your Cloud SQL instance, either by deploying it in a VPC that has access to your Private IP instance, or by configuring Public IP.

Configuring Unix domain sockets

The Proxy also supports Unix domain sockets. To start the Proxy with Unix sockets, run:

# Uses the directory "/mycooldir" to create a Unix socket
# For example, the following directory would be created:
#   /mycooldir/myproject:myregion:myinstance
./cloud-sql-proxy --unix-socket /mycooldir <INSTANCE_CONNECTION_NAME>

To configure a Unix domain socket on a per-instance basis, use the unix-socket query param:

# Starts a TCP listener on localhost:5432 for "postgres"
# and creates a Unix domain socket for "mysql":
#     /cloudsql/myproject:my-region:mysql
./cloud-sql-proxy \
    myproject:my-region:postgres \
    'myproject:my-region:mysql?unix-socket=/cloudsql'

Note

The Proxy supports Unix domain sockets on recent versions of Windows, but replaces colons with periods:

# Starts a Unix domain socket at the path:
#    C:\cloudsql\myproject.my-region.mysql
./cloud-sql-proxy --unix-socket C:\cloudsql myproject:my-region:mysql

Testing Connectivity

The Proxy includes support for a connection test on startup. This test helps ensure the Proxy can reach the associated instance and is a quick debugging tool. The test will attempt to connect to the specified instance(s) and fail if the instance is unreachable. If the test fails, the Proxy will exit with a non-zero exit code.

./cloud-sql-proxy --run-connection-test <INSTANCE_CONNECTION_NAME>

Config file

The Proxy supports a configuration file. Supported file types are TOML, JSON, and YAML. Load the file with the --config-file flag:

./cloud-sql-proxy --config-file /path/to/config.[toml|json|yaml]

The configuration file format supports all flags. The key names should match the flag names. For example:

# use instance-connection-name-0, instance-connection-name-1, etc.
# for multiple instances
instance-connection-name = "proj:region:inst"
auto-iam-authn = true
debug = true
debug-logs = true

Run ./cloud-sql-proxy --help for more details.

Configuring a Lazy Refresh

The --lazy-refresh flag configures the Proxy to retrieve connection info lazily and as-needed, rather than running a background refresh cycle. This setting is useful in environments where the CPU may be throttled outside of a request context, e.g., Cloud Run, Cloud Functions, etc.
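For example:

# Fetches connection info only when a connection is requested
./cloud-sql-proxy --lazy-refresh <INSTANCE_CONNECTION_NAME>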

Additional flags

To see a full list of flags, use:

./cloud-sql-proxy --help

Container Images

There are containerized versions of the Proxy available from the following Google Cloud Container Registry repositories:

  • gcr.io/cloud-sql-connectors/cloud-sql-proxy
  • us.gcr.io/cloud-sql-connectors/cloud-sql-proxy
  • eu.gcr.io/cloud-sql-connectors/cloud-sql-proxy
  • asia.gcr.io/cloud-sql-connectors/cloud-sql-proxy

Each image is tagged with the associated Proxy version. The following tags are currently supported:

  • $VERSION (default)
  • $VERSION-alpine
  • $VERSION-buster
  • $VERSION-bullseye

The $VERSION is the Proxy version without the leading "v" (e.g., 2.11.3).

For example, to pull a particular version, use a command like:

# $VERSION is 2.11.3
docker pull gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.3

We recommend pinning to a specific version tag and using automation with a CI pipeline to update regularly.

The default container image uses distroless with a non-root user. If you need a shell or related tools, use the Alpine or Debian-based (buster, bullseye) images listed above.
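For example, to pull the Alpine-based variant of the same version (matching the tag format above):

# The Alpine image includes a shell
docker pull gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.3-alpine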

Working with Docker and the Proxy

The containers use the Proxy as their ENTRYPOINT, so to run the Proxy from a container you only need to pass flags as the command and publish the Proxy's port to the host. For example, you can use:

docker run --publish <host-port>:<proxy-port> \
    gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.3 \
    --address "0.0.0.0" --port <proxy-port> <instance-connection-name>

You'll need the --address "0.0.0.0" so that the proxy doesn't only listen for connections originating from within the container.

You will need to authenticate using one of the methods outlined in the credentials section. If using a credentials file you must mount the file and ensure that the non-root user that runs the proxy has read access to the file. These alternatives might help:

  1. Change the group of your local file and add read permissions to the group with chgrp 65532 key.json && chmod g+r key.json.
  2. If you can't control your file's group, you can directly change the public permissions of your file by doing chmod o+r key.json.

Warning

This can be insecure because it allows any user in the host system to read the credential file which they can use to authenticate to services in GCP.

For example, a full command using a JSON credentials file might look like

docker run \
    --publish <host-port>:<proxy-port> \
    --mount type=bind,source="$(pwd)"/sa.json,target=/config/sa.json \
    gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.3 \
    --address 0.0.0.0 \
    --port <proxy-port> \
    --credentials-file /config/sa.json <instance-connection-name>

Running as a Kubernetes Sidecar

See the example here as well as Connecting from Google Kubernetes Engine.

Running behind a SOCKS5 proxy

The Cloud SQL Auth Proxy includes support for sending requests through a SOCKS5 proxy. If a SOCKS5 proxy is running on localhost:8000, the command to start the Cloud SQL Auth Proxy would look like:

ALL_PROXY=socks5://localhost:8000 \
HTTPS_PROXY=socks5://localhost:8000 \
    cloud-sql-proxy <INSTANCE_CONNECTION_NAME>

The ALL_PROXY environment variable specifies the proxy for all TCP traffic to and from a Cloud SQL instance. The ALL_PROXY environment variable supports socks5 and socks5h protocols. To route DNS lookups through a proxy, use the socks5h protocol.
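For example, to route DNS lookups through the SOCKS5 proxy as well, use socks5h in the command above:

ALL_PROXY=socks5h://localhost:8000 \
    cloud-sql-proxy <INSTANCE_CONNECTION_NAME>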

The HTTPS_PROXY (or HTTP_PROXY) specifies the proxy for all HTTP(S) traffic to the SQL Admin API. Specifying HTTPS_PROXY or HTTP_PROXY is only necessary when you want to proxy this traffic. Otherwise, it is optional. See http.ProxyFromEnvironment for possible values.

Support for Metrics and Tracing

The Proxy supports Cloud Monitoring, Cloud Trace, and Prometheus.

Supported metrics include:

  • cloudsqlconn/dial_latency: The distribution of dialer latencies (ms)
  • cloudsqlconn/open_connections: The current number of open Cloud SQL connections
  • cloudsqlconn/dial_failure_count: The number of failed dial attempts
  • cloudsqlconn/refresh_success_count: The number of successful certificate refresh operations
  • cloudsqlconn/refresh_failure_count: The number of failed refresh operations

Supported traces include:

  • cloud.google.com/go/cloudsqlconn.Dial: The dial operation including refreshing an ephemeral certificate and connecting the instance
  • cloud.google.com/go/cloudsqlconn/internal.InstanceInfo: The call to retrieve instance metadata (e.g., database engine type, IP address, etc)
  • cloud.google.com/go/cloudsqlconn/internal.Connect: The connection attempt using the ephemeral certificate
  • SQL Admin API client operations

To enable Cloud Monitoring and Cloud Trace, use the --telemetry-project flag with the project where you want to view metrics and traces. To configure the metrics prefix used by Cloud Monitoring, use the --telemetry-prefix flag. When enabling telemetry, both Cloud Monitoring and Cloud Trace are enabled. To disable Cloud Monitoring, use --disable-metrics. To disable Cloud Trace, use --disable-traces.
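For example, a minimal invocation (the telemetry project ID is a placeholder):

# Sends metrics to Cloud Monitoring and traces to Cloud Trace in the given project
./cloud-sql-proxy --telemetry-project <YOUR_PROJECT_ID> <INSTANCE_CONNECTION_NAME>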

To enable Prometheus, use the --prometheus flag. This will start an HTTP server on localhost with a /metrics endpoint. The Prometheus namespace may optionally be set with --prometheus-namespace.
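For example (the namespace value is illustrative):

# Serves Prometheus metrics on a local /metrics endpoint
./cloud-sql-proxy --prometheus --prometheus-namespace cloud_sql <INSTANCE_CONNECTION_NAME>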

Debug logging

To enable debug logging to report on internal certificate refresh operations, use the --debug-logs flag. Typical use of the Proxy should not require debug logs, but if you are surprised by the Proxy's behavior, debug logging should provide insight into internal operations and can help when reporting issues.
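For example:

# Prints verbose logs about certificate refresh operations
./cloud-sql-proxy --debug-logs <INSTANCE_CONNECTION_NAME>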

Localhost Admin Server

The Proxy includes support for an admin server on localhost. By default, the admin server is not enabled. To enable the server, pass the --debug or --quitquitquit flag. This will start the server on localhost at port 9091. To change the port, use the --admin-port flag.

When --debug is set, the admin server enables Go's profiler available at /debug/pprof/.

See the documentation on pprof for details on how to use the profiler.

When --quitquitquit is set, the admin server adds an endpoint at /quitquitquit. The admin server exits gracefully when it receives a POST request at /quitquitquit.
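For example, a sketch of a graceful shutdown using this endpoint (assuming the default admin port of 9091):

# Starts the Proxy with the /quitquitquit endpoint enabled
./cloud-sql-proxy --quitquitquit <INSTANCE_CONNECTION_NAME>

# From another terminal, asks the Proxy to exit gracefully
curl -X POST localhost:9091/quitquitquit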

Frequently Asked Questions

Why would I use the Proxy?

The Proxy is a convenient way to control access to your database using IAM permissions while ensuring a secure connection to your Cloud SQL instance. When using the Proxy, you do not have to manage database client certificates, configure Authorized Networks, or ensure clients connect securely. The Proxy handles all of this for you.

How should I use the Proxy?

The Proxy is a gateway to your Cloud SQL instance. Clients connect to the Proxy over an unencrypted connection and are authorized using the environment's IAM principal. The Proxy then encrypts the connection to your Cloud SQL instance.

Because client connections are not encrypted and authorized using the environment's IAM principal, we recommend running the Proxy on the same VM or Kubernetes pod as your application and using the Proxy's default behavior of allowing connections from only the local network interface. This is the most secure configuration: unencrypted traffic does not leave the VM, and only connections from applications on the VM are allowed.

See the documentation for common examples of how to run the Proxy in different environments.

Why can't the Proxy connect to my private IP instance?

The Proxy does not configure the network between the VM it's running on and the Cloud SQL instance. You MUST ensure the Proxy can reach your Cloud SQL instance, either by deploying it in a VPC that has access to your Private IP instance, or by configuring Public IP.

Should I use the Proxy for large deployments?

We recommend deploying the Proxy on the host machines that are running the application. However, large deployments may exceed the request quota for the SQL Admin API. If your Proxy reports request quota errors, we recommend deploying the Proxy with a connection pooler like pgbouncer or ProxySQL. For details, see Running the Cloud SQL Proxy as a Service.

Can I share the Proxy across multiple applications?

Instead of using a single Proxy across multiple applications, we recommend using one Proxy instance for every application process. The Proxy uses the environment's IAM principal, so a 1-to-1 mapping between application and IAM principal is best. If multiple applications use the same Proxy instance, it becomes unclear from an IAM perspective which principal is doing what.

How do I verify the shasum of a downloaded Proxy binary?

After downloading a binary from the releases page, copy the sha256sum value that corresponds with the binary you chose.

Then run this command (make sure to add the asterisk before the file name):

echo '<RELEASE_PAGE_SHA_HERE> *<NAME_OF_FILE_HERE>' | shasum -c

For example, after downloading the v2.1.0 release of the Linux AMD64 Proxy, you would run:

$ echo "547b24faf0dfe5e3d16bbc9f751dfa6b34dfd5e83f618f43a2988283de5208f2 *cloud-sql-proxy" | shasum -c
cloud-sql-proxy: OK

If you see OK, the binary is a verified match.

Reference Documentation

Support policy

Major version lifecycle

This project uses semantic versioning, and uses the following lifecycle regarding support for a major version:

  • Active - Active versions get all new features and security fixes (that wouldn't otherwise introduce a breaking change). New major versions are guaranteed to be "active" for a minimum of 1 year.

  • Maintenance - Maintenance versions continue to receive security and critical bug fixes, but do not receive new features.

Release cadence

The Cloud SQL Auth Proxy aims for a minimum monthly release cadence. If no new features or fixes have been added, a new PATCH version with the latest dependencies is released.

We support releases for 1 year from the release date.

Contributing

Contributions are welcome. Please see the CONTRIBUTING document for details.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Contributor Code of Conduct for more information.


cloud-sql-proxy's Issues

Dialing Freezes

Running dialPassword from a local Go unit test to a 2nd gen. MySQL server. Not a lot different from the example provided in your package. My instance is white-listing all IPv4 connections right now. I have the right username and password. It dies after 3 minutes of hang time. Exit status 2, no log.

How to Programmatically check if proxy is active

Is there any way on local dev to conditionally check from my application whether gcloud_sql_proxy is active?
I'm on Mac OS X El Capitan, running Wordpress and deploying to the GAE flex PHP environment.
My use case is scripting a conditional check in wp-config.php so I can set different database connection parameters depending on whether Wordpress is using the Cloud SQL instance or local MySQL.
But I can't find anything different between when gcloud_sql_proxy is running and when it's not for PHP to check against.
Maybe the Unix socket?

OpenBSD build fails

I am trying to build cloudsql-proxy on OpenBSD 5.9 amd64.

On Ubuntu, "go get github.com/GoogleCloudPlatform/cloudsql-proxy/cmd/cloud_sql_proxy" succeeds with go 1.63, but it fails on OpenBSD as:

go/src/bazil.org/fuse/error_std.go:27 undefined: errNoXattr
go/src/bazil.org/fuse/fuse.go:1345 undefined: attr
go/src/bazil.org/fuse/fuse_kernel.go:404 undefined: attr

I used go-1.5 (package) and go-1.6 (from source).
Is there any way to build without fuse support, as I do not need fuse on OpenBSD ?

Running cloudsql-proxy as Kubernetes DaemonSet

I'd like to run the cloudsql-proxy container as a DaemonSet instead of a sidecar in my pods, because I have multiple (different) pods on a node, which all need to connect to a Cloud SQL instance. So instead of:

volumes:
- emptyDir:
  name: cloudsql-sockets

I use:

volumes:
- hostPath:
    path: /cloudsql
  name: cloudsql-sockets

So the other pods can just mount the hostPath /cloudsql/ (read-only) to load the UNIX sockets.

However, when I try to start the cloudsql-proxy container, it gives me this error:

Error syncing pod, skipping: failed to "StartContainer" for "cloudsql-proxy" with RunContainerError: "runContainer: Error response from daemon: mkdir /cloudsql: read-only file system"

According to the Kubernetes docs, when using hostPath, only root can write to it. So containers which want to write to the mounted hostPath should also be using user root. Is this not the case for cloudsql-proxy container?

One solution would be using TCP sockets, but I prefer UNIX sockets.

Provide better error message if service account does not have the required scopes

The current error message is:
POST "https://www.googleapis.com/sql/v1beta4/projects/project/instances/instance/createEphemeral": 403 Forbidden; Body="{\n "error": {\n "errors": [\n {\n "domain": "global",\n "reason": "insufficientPermissions",\n "message": "Insufficient Permission"\n }\n ],\n "code": 403,\n "message": "Insufficient Permission"\n }\n}\n"; read error:

If reason is "insufficientPermissions", we should provide a nice error message about the required scopes.
I don't believe "insufficientPermissions" shows up in any other scenarios other than missing the scopes.

Errors during Post to createEphemeral: PROTOCOL_ERROR

We are receiving intermittent reports that the Cloud SQL Proxy is receiving errors which look similar to this:

couldn't connect to "$INSTANCE": Post https://www.googleapis.com/sql/v1beta4/projects/$PROJECT/$NAME/createEphemeral?alt=json: stream error: stream ID 1; PROTOCOL_ERROR

If you are encountering this problem, please send the following information to [email protected]:

  1. Cloud SQL Instance name and project name
  2. What version of the proxy you are using (pass --version to have it printed out)
  3. Restart the Cloud SQL Proxy, setting an environment variable GODEBUG=http2debug=2 (as per the Go HTTP docs). When the PROTOCOL_ERROR occurs again, there should be extra debug logs. Please strip out secret data such as bearer tokens, etc., and include the logs in your email

In addition, we have heard reports that updating to the newest version (1.09) fixes this problem. We'd especially like to hear from anyone running this version that is still affected.

Create official Helm chart

The official docs recommend the "sidecar pattern" for deploying to Google Container Engine. It's painstaking to follow the instructions currently, having to hand wire secrets up and carefully splice lines of code together from the sample YAML.

It'll be nice to have an official google/cloudsql-proxy Helm chart.

Install with a single helm install google/cloudsql-proxy -f values.yaml command.

In addition, I would like to suggest instructions for a standalone pod/service hosting cloud_sql_proxy that listens on tcp:0.0.0.0:3306, also installed using a Helm chart. Yes, it's less secure than listening on localhost, but for those who have secured their cluster properly, decoupling Cloud SQL proxy from the application pod greatly simplifies deployment.

Staying alive on SIGTERM

Hi, I'm wondering if it's possible to add an option to keep cloudsql-proxy from exiting on receiving a SIGTERM.

I'm running cloudsql-proxy on Kubernetes in a pod alongside a web app. When Kubernetes deletes a pod, it sends a SIGTERM to both cloudsql-proxy and my web app and then sends a SIGKILL 30 seconds later. Upon receiving the SIGTERM, my web app performs a graceful shutdown by draining the requests in flight, but cloudsql-proxy shuts down immediately. This means that the requests being drained fail if they need any more access to the database.

It'd be great if I could configure cloudsql-proxy to stay alive after receiving a SIGTERM so my web app can drain requests properly. Eventually, cloudsql-proxy can exit upon receiving a SIGKILL.

Client closed local connection

I just migrated from MySQL to PostgreSQL. With MySQL everything was fine, but now, after establishing a connection to the proxy from my machine, the proxy closes all connections from my connection pool.

2017-03-25T00:01:43.528146150Z 2017/03/25 00:01:43 using credential file for authentication; [email protected]
2017-03-25T00:01:43.528372912Z 2017/03/25 00:01:43 Listening on 0.0.0.0:5432 for id:id:db-id
2017-03-25T00:01:43.533379753Z 2017/03/25 00:01:43 Ready for new connections
2017-03-25T00:01:50.846499658Z 2017/03/25 00:01:50 New connection for "id:id:db-id"
2017-03-25T00:01:51.677064579Z 2017/03/25 00:01:51 New connection for "id:id:db-id"
2017-03-25T00:01:51.958285168Z 2017/03/25 00:01:51 New connection for "id:id:db-id"
2017-03-25T00:02:23.882565648Z 2017/03/25 00:02:23 Client closed local connection on 10.53.1.53:5432
2017-03-25T00:02:23.882597137Z 2017/03/25 00:02:23 Client closed local connection on 10.53.1.53:5432
2017-03-25T00:02:23.882601223Z 2017/03/25 00:02:23 Client closed local connection on 10.53.1.53:5432

Does anyone know what can cause this problem? The connection appears to work, because my ORM initializes all tables, but then all connections are suddenly dropped.

Getting Timeout Error after following Kubernetes example

Hi,

I created a GKE Cluster and followed the Kubernetes instructions:

Now I'm getting the following error (XXX:XXX is my Project and Instance Name):

Open socket for "XXX:XXX" at "/cloudsql/XXX:XXX"
Socket prefix: /cloudsql
Got a connection for "XXX:XXX"
couldn't connect to "XXX:XXX": Post https://www.googleapis.com/sql/v1beta4/projects/XXX/instances/XXX/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp: i/o timeout

Any idea where the timeout comes from?

"fusermount": executable not found in $PATH

Hello,

I am trying to start a docker container using the -fuse switch as follows:
docker run -it -v $(pwd)/cred.json:/secret/cred.json -v /cloudsql -v /etc/ssl/certs b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy -dir=/cloudsql -fuse -credential_file=/secret/cred.json
And I get the following error:
2016/07/11 16:24:48 Mounting /cloudsql...
2016/07/11 16:24:48 Could not start fuse directory at "/cloudsql": cannot mount "/cloudsql": fusermount: exec: "fusermount": executable file not found in $PATH

Could there be missing dependencies in the Docker image?
Thanks in advance.

Make logging configurable

When using the proxy with PHP (and no persistent connections) the chattiness of the proxy logs is overwhelming. Since I am running the proxy on Container Engine, those logs are being shipped to Cloud Logging and the amount of "new connection" and "connection closed" messages is significant.

It would be nice to disable logging of new and closed connections.

Listening on 0.0.0.0 doesn't work

I am simply using the sample configuration in the readme:

stephen@test-asyncjobs-001:/cloudsql$ sudo cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:sql-inst=tcp:0.0.0.0:3306
2016/07/22 11:46:11 listenInstance: "my-project:us-central1:sql-inst=tcp:0.0.0.0:3306"
2016/07/22 11:46:11 listen tcp: too many colons in address 127.0.0.1:0.0.0.0:3306

The address doesn't seem to be parsed correctly.

"connection timed out" errors

In my cloud_sql_proxy output I see fairly regular (every hour or so) messages like this one:

cloud_sql_proxy[2112]: 2017/05/05 21:38:42 Reading data from xxx:xxx had error: read tcp xxx.xxx.xxx.xxx:xxxxx->xxx.xxx.xxx.xxx:xxxx: read: connection timed out

These seem to match up with errors like this one in my Python code:

OperationalError: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.

I have two similar GCE VMs in the same region but different availability zones. Only one of them logs these errors.

Are timeouts like this one expected? Any tips on troubleshooting them or working around them?

Cannot download 1.06 precompiled binary

In 1.06 release (https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases/tag/1.06),

(snip)
The precompiled binaries can be found at the standard locations:

But downloaded binary is 1.05, not 1.06.

$ curl -LO https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.2M  100 10.2M    0     0  9664k      0  0:00:01  0:00:01 --:--:-- 9667k
$ sha256sum cloud_sql_proxy.linux.amd64
9c5eab456cebef14905f3f42d2ba59746fcc1e99b2db378b1b634fb77ed67b64  cloud_sql_proxy.linux.amd64
$ chmod +x cloud_sql_proxy.linux.amd64
$ ./cloud_sql_proxy.linux.amd64 --version
Cloud SQL Proxy: version 1.05; sha 0f69d99588991aba0879df55f92562f7e79d7ca1 built Mon May  2 17:57:05 UTC 2016

Where can I download 1.06 precompiled binaries?

Google Cloud SQL API Quotas apply; may kill your application

Connecting to a Google Cloud SQL instance via tcp_3306 is not counting towards any API quotas. Connecting to a Google Cloud SQL instance via the cloudsql-proxy however IS counting towards the Google Cloud SQL API Quotas. Specifically, there is a limit of 10k queries per day and a limit of 100 queries per 100 seconds per user. Queries in this case refer to API queries, not SQL queries.

An API query is not just consumed for creating a connection, but also for other interactions, depending on ones configuration; e.g. getting a list of instances when you do not have configured fixed instances.

We are currently learning this the hard way, by having our GCE hosted application being down because of an exceeded API limit. I am very disappointed that this is hitting us completely unexpected.

On the one hand, the cloudsql-proxy is absolutely necessary when using autoscaling GCE instance groups (you cannot determine your GCE instances' IP addresses when machines are booted to meet rising demand, and thus cannot configure "Authorized networks" on the SQL instance; opening ACLs to all IPs is not an option either). On the other hand, switching from tcp_3306 connections to the cloudsql-proxy may silently kill your application if you don't closely monitor API usage.

The documentation of this project both here on Github as well as on the official Google sites should make this crystal clear. That is not the case today.

Add support for Unix sockets on Windows

On Windows 10 I have gcloud setup and working. I downloaded the binary and without any arguments (per Using automatic instance discovery with gcloud credentials) I saw:

2017/04/18 11:51:31 Using gcloud's active project: myproject-12345
2017/04/18 11:51:33 mkdir myproject-12345:us-central1:test: The filename, directory name, or volume label syntax is incorrect.
2017/04/18 11:51:33 errors parsing config:
        mkdir myproject-12345:us-central1:test: The filename, directory name, or volume label syntax is incorrect.

I guess this is to be expected as Windows does not support : in directory names, but perhaps by default the character should be - or changed for Windows specifically.

After changing the separator in code to - it worked but then I hit:

2017/04/18 11:58:57 Using gcloud's active project: myproject-12345
2017/04/18 11:58:59 errors parsing config:
        invalid "myproject-12345:us-central1:test": unsupported network: unix

And in code it specifically checks if OS is Windows and removes unix from supported list. I wonder if this could be default tcp on Windows?

In the end I got things to work with this: .\cloud_sql_proxy.exe -instances=myproject-12345:us-central1:test=tcp:5433.

Container Engine Permission Errors

I have been having issues with the proxy on Container Engine. I have a Cloud SQL database set up using SSL and an allowed IP of 0.0.0.0/0. I want to be able to connect to it from my containers via a service in Kubernetes. I don't want to put a cloudsql-proxy in every one of my pods, so I believe I need to use the proxy with a TCP port instead of the UNIX socket. I have successfully created the service, secret, and replication controller; however, the proxy is giving me the error:

the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credentials_file parameter

I have given it a credentials_file (the secret) and the service account has the Editor role assigned to it. I can run the proxy on my machine (OSX) with the credential file and it works perfectly. Below are my Kubernetes definitions:

Secret

apiVersion: v1
kind: Secret
metadata:
  name: sqlcreds
type: Opaque
data:
  file.json: "<base64encoded service key json file>"

Service

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
    role: proxy

Replication Controller

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-proxy
  labels:
    app: mysql
    role: proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
        role: proxy
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: sqlcreds
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      containers:
      - name: proxy
        image: b.gcr.io/cloudsql-docker/gce-proxy
        command: ["/cloud_sql_proxy", "-dir=/cloudsql", "-credential_file=/secret/file.json", "-instances=<redacted>=tcp:3306"]
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: secret-volume
          mountPath: /secret/
        - name: ssl-certs
          mountPath: /etc/ssl/certs

I don't understand where I am going wrong here. I don't quite understand the '/etc/ssl/certs' directory, but I included it anyway. It seems to match your README, however it is still getting the errors. I have read through the other issues here and there seem to be a couple of people reporting similar issues. Maybe the docker image hasn't been updated with the latest code? It says it was modified on Apr 18th, if that helps.

oauth2: cannot fetch token

Our GKE based webapp is unable to connect to Google Cloud SQL right now. I have no idea why this suddenly started happening. We haven't touched the GKE deployment in several days.

Our logs show everything being OK until 14:30:12 CEST today (2017-05-25), when all connections fail with this error:
2017/05/25 15:58:37 couldn't connect to "****:europe-west1:****": Post https://www.googleapis.com/sql/v1beta4/projects/*****/instances/****/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp: i/o timeout

We are running cloudsql-proxy through the image b.gcr.io/cloudsql-docker/gce-proxy:1.05
I can successfully connect manually with cloud_sql_proxy from my machine.

I realize this might not be a cloudsql-proxy issue, but any pointers to help me out would be much appreciated.

"Embeddable" mode for Go applications?

I've written a bit of a hack that allows me to embed the Cloud SQL Proxy into my Go application, so I don't need to set up or run a separate service from my application. I'm planning on releasing this as an open source library, but then I realized: it would be better if the cloudsql-proxy project supported this natively.

Would the maintainers of this project be interested in working with me to merge some patches to make this easy/possible?

If so, I'll try to clean my patch up, sign the CLA, and submit a pull request. If not, I'll release it as my own open source library. Thanks!

Fail on multiple replicas for 1.09

Using the example from Kubernetes.md, I've created a deployment with 2 replicas of the cloudsql-proxy. With versions 1.05 and 1.08, it works. With version 1.09 it fails if replicas is greater than 1 with this message

Error from server (BadRequest): container "cloudsqlproxy" in pod "cloudsqlproxy-3735439449-p58dv" is waiting to start: trying and failing to pull image

The kubernetes dashboard shows this error:

Failed to pull image "b.gcr.io/cloudsql-docker/gce-proxy:1.09": failed to register layer: rename /var/lib/docker/image/overlay/layerdb/tmp/layer-289391082 /var/lib/docker/image/overlay/layerdb/sha256/305e4867f6737b619a7ab334876d503b12fa391a3d28478752575b49d6857e69: directory not empty
Error syncing pod, skipping: failed to "StartContainer" for "cloudsqlproxy" with ErrImagePull: "failed to register layer: rename /var/lib/docker/image/overlay/layerdb/tmp/layer-289391082 /var/lib/docker/image/overlay/layerdb/sha256/305e4867f6737b619a7ab334876d503b12fa391a3d28478752575b49d6857e69: directory not empty"

Setting replicas to 1 or using version 1.08 both make it work.

Example config:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cloudsqlproxy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      containers:
      - image: b.gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsqlproxy
        command:
        - /cloud_sql_proxy
        - -dir=/cloudsql
        - -instances=project:us-central1:db-test=tcp:3306
        - -credential_file=/credentials/credentials.json
        ports:
        - name: sqlpxy-prt-wp
          containerPort: 3306
        volumeMounts:
        - mountPath: /cloudsql
          name: cloudsql
        - mountPath: /credentials
          name: service-account-token
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs
          readOnly: true
      volumes:
      - name: cloudsql
        emptyDir:
      - name: service-account-token
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs

CloudSQL Proxy Fails after Short Period of Time (~12 hours or so)

We are seeing an issue where the cloud SQL proxy will fail after a short period of time and has to be restarted. This is leading to "could not connect" errors in our apps. We are running the command in a startup bash script, like this:

#! /bin/bash
# starts proxy
nohup /usr/local/bin/cloud_sql_proxy -instances=instance_string=tcp:3306 > /var/log/cloudproxy/fulllog.log 2>&1 &

Is there any reason this process would stop authenticating randomly? A ps aux | grep proxy still shows the process running, but the server ceases to authenticate. Is there any logging I can perform to find the source of this issue? The fulllog.log above was temporary, but just shows lots of connects and disconnects.

CloudSQL Proxy 175% slower than direct connection

Is the proxy expected to be 175% slower than a direct connection?

Every hour I'm also experiencing some spikes. The first query takes up to 1 second. Could it be some kind of reauthentication?

In my small test, the first query (which creates the connection) takes 5ms without proxy and >15ms with proxy.

Can I do any configuration to reduce the latency?

Include Dockerfile for cloudsql-docker

I can't seem to find the Dockerfile or image components for the Docker image compiled from this repo and available at "b.gcr.io/cloudsql-docker/gce-proxy". Those of us using this on GCE as the only way to access Cloud SQL securely are therefore unable to build an image from specific commits, for instance to roll back to a previous, non-production-breaking version. Kindly share the Dockerfile or otherwise maintain a versioning system for the images corresponding to the commits on GitHub.

Unable to use on windows

Whenever the command is run, the following error shows up: "listen tcp 127.0.0.1:3306: bind: An attempt was made to access a socket in a way forbidden by its access permissions."

I'm not able to do anything further; surely the devs have come across this on Windows. Is there a solution?

Unable to use -credential_file on GCE

This was working for me last week. I suspect it could be related to the scope check fix in #21

The check for onGCE in checkFlags comes before anything would look at the tokenFile provided in -credential_file (or a -token or GOOGLE_APPLICATION_CREDENTIALS for that matter). So if the default service account does not have the sqlservice.admin scope those other auth methods will never be checked.

I'm running in Kubernetes and would prefer to only give access to the pods that need it.

Improve logging

There are probably a lot of verbose logs which do not need to be printed during normal scenarios. It'd be nice to clean this up so that users don't have to ignore a lot of logs that are mostly useful during development.

An option would be to introduce a 'verbose' flag to turn on the more verbose logs.

Some things to clean up:

  • During the creation of Unix sockets, oftentimes a message will be printed complaining that the socket was not found before it was created; the proxy should at least ignore errors caused by ENOENT
  • Clarify errors from disconnect: it'd be good to know whether the local or remote connection was closed first
  • Remove logs from new connections ("Got connection for A_PATH"); it's probably not very useful.

IPv4 addresses still need to be authorized after connecting through cloudsql-proxy

We have a Python application in a GKE cluster that connects to a Cloud SQL instance through cloudsql-proxy. Our connections are effectively going through the proxy; I can see traffic from and to the cloudsqlproxy container, and the logs confirm this.

The cloudsqlproxy user has access granted from the cloudsqlproxy~% hostname and no password.

This is how our Django app config file looks like:

    DATABASES = {
        'default': {
            'ENGINE'    : 'django.db.backends.mysql',
            'NAME'      : 'dbname', 
            'USER'      : 'cloudsqlproxy',
            'PASSWORD'  : '', 
            'HOST'      : '127.0.0.1',
            'PORT'      : '',
        }
    }

And everything works... as long as the IP addresses of the Compute Engine instances hosting our containers are whitelisted in the Cloud SQL authorized networks section. We have auto-scaling set up and our VM instances are recycled on a regular basis, causing outages because the new IPs are not whitelisted.

The whole point of cloudsql-proxy is not needing to grant access from specific IP addresses, and this is not working for us at the moment. We are definitely doing something wrong, and any help that leads us to find what it is would be greatly appreciated.

Google Cloud SQL using wrong SSL cert to establish socket connection from Kubernetes

We have 2 clusters, a staging cluster and a production cluster. We set up our production cluster to use the Cloud SQL proxy as a pod attached to our app pods to connect to the Cloud SQL servers.

When we stood up the staging cluster, everything worked except that our apps' Cloud SQL proxies are throwing the error below; all the secrets and associated paths have been changed for the staging cluster.

couldn't connect to " project-name:us-central1:name-db-staging": x509: certificate is valid for project-name:name-db, not project-name:name-db-staging

This is happening for two separate projects; both are saying the production SSL cert is being used, but the DB it is trying to connect to is the staging server.

Connecting to a specific db

Not an issue, more of a recommendation.

I think it's worth updating the README to explain that the user must set the db manually when using from within a Go program.

Rather than the typical:

db, err = sql.Open("mysql", sqluser+":"+sqlpass+"@tcp("+sqlhost+":"+sqlport+")/"+sqldb+"?charset=utf8&parseTime=true&allowAllFiles=true")

It's required to connect to USE a db like this, after calling DialPassword:

_, err = e.Exec("USE " + sqldb)
if err != nil {
	panic(err)
}

Unless I've totally missed the point, in which case, can someone explain the correct way to dial.

Cannot Connect to Cloud SQL from Compute Engine via Proxy using PHP/PDO

I am having an issue connecting to the proxy with PHP/PDO. I followed all the instructions on this page https://cloud.google.com/sql/docs/compute-engine-access and I am able to connect with mysql -u root -p -S /cloudsql/projectid:region:instance and confirm with show databases;. My issue is that when I connect with PDO I get:

SQLSTATE[HY000] [2002] Permission denied

The DSN I am trying to connect with is mysql:unix_socket=/cloudsql/<projectid>:<region>:<instance>;dbname=<dbname>;charset=utf8, with a password.

How did you get this to work with PHP? I am able to confirm that the proxy is connecting in terminal. I am trying to get this to work with PDO running on CentOS. Is this related to Issue: #7?

Thanks

Feature: Connection Pooling

@Carrotman42 pointed out in issue #87 that connection pooling might improve connection latency.

Is it possible that the cloudsql-proxy can handle this itself? If is it best practice it would benefit a lot of projects/users.

How to connect using a service account credentials file using the go library

I'm trying to make the Go dialer work (from https://godoc.org/github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql) from outside of GCE as well as inside Kubernetes. I can't use the default service account of the Compute Engine VM, since sometimes I don't run it there. Until now I've been running the proxy on the same machine as the Go program, and then connecting using mysql to 127.0.0.1:3306, but I'd like to switch to the provided proxy dialer instead.

The example usage from https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/tests/dialers_test.go doesn't show how to pass service account credentials.

Any thoughts?

How can I debug notAuthorized errors?

I'm trying to connect to a postgresql instance by using psql -h 127.0.0.1, but that fails with:

psql: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.

And in the cloud sql logs I see:

2017/04/05 08:38:23 using credential file for authentication; [email protected]
2017/04/05 08:38:23 Listening on 127.0.0.1:5432 for myproject-157612:europe-west1-c:myproject
2017/04/05 08:38:23 Ready for new connections
2017/04/05 08:39:09 New connection for "myproject-157612:europe-west1-c:myproject"
2017/04/05 08:39:09 couldn't connect to "myproject-157612:europe-west1-c:myproject": ensure that the account has access to "myproject-157612:europe-west1-c:myproject" (and make sure there's no typo in that name). Error during createEphemeral for myproject-157612:europe-west1-c:myproject: googleapi: Error 403: The client is not authorized to make this request., notAuthorized

The service account I'm using has the CloudSQLAdmin role. I have double-checked that I have no typos in my project name, and have inspected the credentials file that is mounted into the cloudsql pod to verify it contains the service account credentials I'm expecting.

Doc says Kubernetes needs Service Account but it doesn't

The docs say that you have to set up a service account, download the JSON file, and use it in the proxy container as a secret.

However, when using this on Google Container Engine you actually do not need any service account JSON file. The Proxy uses the default service account of the Google Compute Engine instance the pod is running on. So if the Cloud SQL API is enabled and the container cluster was configured with full access, then there is no need.

Maybe a short note would be helpful :-)

Proxy crashing

I'm trying to connect to Cloud SQL from a Compute Engine instance in my Python application. I'm running the proxy like this:

nohup ./cloud_sql_proxy -dir=/cloudsql --instances=my-project:us-central1:my-sql-instance=tcp:3307 &

I can successfully connect using the MySql client:

mysql -u root --host 127.0.0.1 --port 3307 -p
mysql> select 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.00 sec)

I can also connect from a python repl:

>>> from sqlalchemy import create_engine
>>> from sqlalchemy import text
>>> engine = create_engine('mysql+pymysql://root:password@localhost:3307/db')
>>> engine.execute(text("SELECT 1")).fetchone()
(1,)

But when I run this same code from my application, I get Network Unreachable errors:

  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1990, in execute
    connection = self.contextual_connect(close_with_result=True)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2039, in contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2078, in _wrap_pool_connect
    e, dialect, self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception_noconnection
    exc_info
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2074, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 713, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 1060, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 1057, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 607, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 385, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/__init__.py", line 88, in Connect
    return Connection(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 679, in __init__
    self.connect()
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 922, in connect
    raise exc
OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 101] Network is unreachable)")

When I ssh back to the box and check the running processes the Cloud SQL proxy is no longer running.

pgrep cloud_sql_proxy
# No longer return a process id

My application is a script doing a lot of SQL work, so maybe it's overloading the proxy? How can I figure out the issue?

Incorrect permission checking

When installing the current binary from https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 and running it on a Google Cloud Compute Instance, I get the following error message:

2016/04/17 10:43:57 the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credentials_file parameter

The compute instance has been created with

gcloud compute instances create "$instancename"  --boot-disk-device-name "$instancename" --image "https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1404-trusty-v20160114e" --zone "europe-west1-b" --machine-type "n1-standard-1" --network "default" --maintenance-policy "MIGRATE" --scopes default="https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/sqlservice.admin","https://www.googleapis.com/auth/sqlservice","https://www.googleapis.com/auth/cloud.useraccounts.readonly","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/devstorage.read_only" --boot-disk-size "10" --boot-disk-type "pd-standard"

Note the

--scopes default=[..]"https://www.googleapis.com/auth/sqlservice.admin",[..]

The API has been enabled on project level.

It turns out that /cmd/cloud_sql_proxy/cloud_sql_proxy.go is referencing the non-existent scope https://www.googleapis.com/auth/sqladmin

Missing previous releases of cloudsql-proxy

Hey,

I'm trying to deploy cloudsql-proxy, but one of the issues I'm facing is managing releases.

It appears that you only expose one version at a given time (only the tag for 1.05 is visible) which makes it really hard to reproduce a build for an old version.

It also appears that the binaries you provide does not contain the version number, I assume this means that they are always the 'latest'. I'd like to package one specific version and in order to do it securely I want to be able to verify its checksum. If this changes over time (a new version replacing the old one) this will no longer work.

1) I'd like to ask that you provide URLs for downloading the static binaries which include the version number, and guarantee some kind of retention for old versions (e.g. 6 months).
The URLs I'm referring to right now are the ones listed here:
https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases

2) Can we guarantee that tags for old releases remain available after a new release is made available? This would allow us to set up a build for the project; without it, that is something we would have to attempt to maintain ourselves.

3) What guarantees do you intend to provide between major vs. minor version bumps? You do not appear to use semantic versioning right now, so this is unclear.

Thank you for your time!
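
For context on the checksum point: with versioned, immutable download URLs, pinning a build would look roughly like the sketch below. The URL and checksum are placeholders, not real release artifacts:

# hypothetical versioned URL and checksum, shown only to illustrate the request
VERSION="1.05"
curl -o cloud_sql_proxy "https://example.com/cloudsql-proxy/v${VERSION}/cloud_sql_proxy.linux.amd64"
echo "<expected-sha256>  cloud_sql_proxy" | sha256sum -c -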

cannot fetch token: Post https://accounts.google.com/o/oauth2/token: x509: failed to load system roots and no roots provided

I'm trying to connect to a PostgreSQL Cloud SQL instance from a Kubernetes pod, but when I try to establish a connection with psql -h 127.0.0.1, I get:

psql: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.

Looking at the logs of the cloudsql pod, I see:

2017/04/04 13:15:53 New connection for "myproject-157612:europe-west1-c:myproject"
2017/04/04 13:15:53 couldn't connect to "myproject-157612:europe-west1-c:myproject": Post https://www.googleapis.com/sql/v1beta4/projects/myproject-157612/instances/myproject/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: x509: failed to load system roots and no roots provided

I'm using cloudsqlproxy 1.07, with the following deployment spec:

- image: gcr.io/cloudsql-docker/gce-proxy:1.07
  name: cloudsql
  command: ["/cloud_sql_proxy", "-dir=/cloudsql", "-credential_file=/secret/cloud-sql.json", "-instances=myproject-157612:europe-west1-c:myproject=tcp:5432"]
  volumeMounts:
    - name: cloudsql
      mountPath: /cloudsql
    - name: sql-proxy-secret
      mountPath: /secret/

Am I supposed to mount some SSL certificates inside the cloud sql pod?
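
One quick way to check whether the container is simply missing CA certificates (which would explain the x509 error) is to look inside the running proxy container; a sketch, assuming kubectl access and a placeholder pod name:

# check whether the proxy container has a CA bundle for verifying Google's TLS endpoints
# (<cloudsql-pod> is a placeholder for the actual pod name)
kubectl exec <cloudsql-pod> -c cloudsql -- ls /etc/ssl/certs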

Invalid JWT: Token must be a short-lived token and in a reasonable timeframe

Can anyone tell me what I'm doing wrong?

I want to use the proxy in a Kubernetes cluster outside of GCE, using a service account and your Docker image:

docker run -ti -p 3306:3306 \
  -v `pwd`/file.json:/file.json \
  -v `pwd`/ssl-certs:/etc/ssl/certs \
  gcr.io/cloudsql-docker/gce-proxy:1.06 /cloud_sql_proxy \
  -credential_file=/file.json \
  -instances=<myproject>:us-central1:<instance-name>=tcp:0.0.0.0:3306

And then when I try to connect using the mysql client, the Cloud SQL proxy binary outputs this:

2017/02/10 14:09:15 using credential file for authentication; email=...@...
2017/02/10 14:09:15 Listening on 0.0.0.0:3306 for <project>:us-central1:<instance-name>
2017/02/10 14:09:15 Ready for new connections
2017/02/10 14:09:41 New connection for "<project>:us-central1:<instance-name>"
2017/02/10 14:09:42 couldn't connect to "<project>:us-central1:<instance-name>": Post https://www.googleapis.com/sql/v1beta4/projects/<project>/instances/<instance-name>/createEphemeral?alt=json: oauth2: cannot fetch token: 400 Bad Request
Response: {
  "error" : "invalid_grant",
  "error_description" : "Invalid JWT: Token must be a short-lived token and in a reasonable timeframe"
}

Thanks in advance,
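
Not from the original thread, but this particular invalid_grant message is commonly associated with clock skew on the machine signing the JWT; a quick sanity check on the Docker host might look like:

# compare the host clock against a known-good source; a skew of more than a few
# minutes can push signed tokens outside the accepted time window
date -u
timedatectl status   # on systemd hosts; shows whether NTP synchronization is active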

Client fails on createEphemeral

I'm getting an odd failure from following these instructions:

https://github.com/GoogleCloudPlatform/cloudsql-proxy#to-use-from-kubernetes

2016/05/17 23:59:41 Listening on 127.0.0.1:3306 for <project>:<region>:<instance>
2016/05/17 23:59:41 Ready for new connections
2016/05/17 23:59:51 New connection for "<project>:<region>:<instance>"
2016/05/17 23:59:52 couldn't connect to "<project>:<region>:<instance>": ensure that the account has access to "<project>:<region>:<instance>" (and make sure there's no typo in that name). Error during createEphemeral for <project>:<region>:<instance>: googleapi: Error 403: The client is not authorized to make this request., notAuthorized

Looking at the code, createEphemeral is failing, but I'm at a loss as to what is going on with the randomized key that's being created there.
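
As a hedged diagnostic (not a confirmed fix from this thread): a 403 on createEphemeral often comes down to the Cloud SQL Admin API not being enabled for the project, or the credentials lacking access to it. Two quick checks with gcloud, using a placeholder project ID:

# confirm the Cloud SQL Admin API is enabled for the project
gcloud services list --enabled --project <project> | grep sqladmin

# enable it if it is missing
gcloud services enable sqladmin.googleapis.com --project <project>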

high cpu usage

I start the SQL proxy using the following command:

exec cloud_sql_proxy -dir=/cloudsql -instances=project:database

When the website is loaded, the CPU usage of the proxy spikes. Is there any way around this?

Start proxy within a Node/Express application, and run on a client in Heroku

Is there a way to run the scripts to download the proxy, have Google authentication set up, and begin listening for connections from an application in Heroku with the appropriate credentials? I have an application in Heroku with Express routes that connect to a Google Cloud SQL instance, and I would like to use the cloud_sql_proxy to connect it.

I can run the application locally, starting the proxy locally and listening for events. It seems to me that the proxy needs to run on the same machine/instance as the application, so it would seem to have to run in the cloud with Heroku. If so, how do I set up the proxy invocation to do so? Should it be invoked within Node/Express?

Is an instance a container?

It's not clear to me what to put in the instance part of project1:region:instance1.

I am using Kubernetes with 2 nodes, several services, and several deployments. One of the replica sets of one of the deployments has a pod with 2 containers: cloudsql-proxy and my app container.

Is it possible to clarify this in the docs?
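
For what it's worth, the instance segment is the Cloud SQL instance name itself, not a container or pod. Assuming the instance already exists, the full value to pass to -instances can be read from gcloud:

# prints the connection name in the form project:region:instance
gcloud sql instances describe instance1 --format="value(connectionName)"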
