Comments (24)
In my case the Cloud SQL Proxy is much slower than a direct connection using IP whitelisting.
I wrote a small API that starts a transaction and commits it. I deployed the app first on Google Container Engine, connecting to Cloud SQL via the proxy, and then on Google Compute Engine, connecting to Cloud SQL via IP whitelisting. Below are the results:
| App Deployed on | Number of Requests | 99th Percentile | Connecting Via |
|---|---|---|---|
| GKE | 20140 | 113 ms | Cloud SQL Proxy |
| GCE | 20895 | 13 ms | IP whitelisting |
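For reference, a per-request 99th percentile like the one in the table can be computed from recorded latencies with the nearest-rank method. This is a self-contained sketch (the sample latencies are made up, not the benchmark's data):

```java
import java.util.Arrays;

public class Percentile {
    // Nearest-rank percentile: sort, then take the ceil(p/100 * N)-th value.
    static long percentile(long[] latenciesMs, double p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)]; // convert 1-based rank to index
    }

    public static void main(String[] args) {
        long[] latencies = {5, 7, 9, 11, 13, 100, 8, 6, 12, 10};
        System.out.println("p99 = " + percentile(latencies, 99.0) + " ms"); // 100 ms
        System.out.println("p50 = " + percentile(latencies, 50.0) + " ms"); // 9 ms
    }
}
```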
I am using Hibernate to interact with MySQL, with the following configuration for both deployments:
```yaml
database:
  driverClass: com.mysql.jdbc.Driver
  # the JDBC URL
  url: jdbc:mysql://DB_IP/DB_NAME?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "SELECT 1"
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 42
  properties:
    charSet: UTF-8
    hibernate.show_sql: false
    hibernate.hbm2ddl.auto: validate
    hibernate.session.events.log: false
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
  maxConnectionAge: 10s
  checkConnectionOnBorrow: true
```
from cloud-sql-proxy.
We finally found a solution using the socket factory. We stored the service account's Cloud SQL credentials as a secret, set the GOOGLE_APPLICATION_CREDENTIALS environment variable on the pod, and used the socket factory to connect to the instance with connection pooling.
@vamsipkris Sorry for the delay, but this is what we did:

1. We created a service account with the Cloud SQL Client permission.

2. Download the .json key file and use it to create a secret:

```
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=[actual path to service account json file]
```

3. Mount the json file into your container as a volume:

```yaml
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /secrets/cloudsql/mysql-sa.json
volumeMounts:
  - name: service-account-credentials-volume
    mountPath: /secrets/cloudsql
    readOnly: true
volumes:
  - name: service-account-credentials-volume
    secret:
      secretName: cloudsql-instance-credentials
      items:
        - key: credentials.json
          path: mysql-sa.json
```

Note the GOOGLE_APPLICATION_CREDENTIALS environment variable: the socket factory needs it so it can load the service account and authenticate with Cloud SQL.

4. Add the socket factory to your pom.xml and make the connection through it, with connection pooling:

```xml
<!-- https://mvnrepository.com/artifact/com.google.cloud.sql/mysql-socket-factory -->
<dependency>
  <groupId>com.google.cloud.sql</groupId>
  <artifactId>mysql-socket-factory</artifactId>
  <version>1.0.5</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava-jdk5</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```
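To illustrate what connecting through the socket factory looks like, here is a minimal sketch of the JDBC URL shape it uses (the instance connection name `my-project:europe-west1:my-instance` and database name `mydb` are placeholder assumptions, not values from this thread):

```java
public class CloudSqlUrl {
    // Builds a JDBC URL in the form the Cloud SQL MySQL socket factory expects:
    // the host is the literal "google", and the real instance is selected by
    // the cloudSqlInstance and socketFactory query parameters.
    static String jdbcUrl(String dbName, String instanceConnectionName) {
        return "jdbc:mysql://google/" + dbName
                + "?cloudSqlInstance=" + instanceConnectionName
                + "&socketFactory=com.google.cloud.sql.mysql.SocketFactory";
    }

    public static void main(String[] args) {
        // Placeholder values; substitute your own project, region, and instance.
        System.out.println(jdbcUrl("mydb", "my-project:europe-west1:my-instance"));
    }
}
```

Pass the resulting URL to your connection pool (for example as the `url` field in the Dropwizard `database:` config above) instead of a `jdbc:mysql://DB_IP/...` URL.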
@hfwang Our problem was not related to the proxy.
The actual problem on our side was that the Java MySQL driver does not send batched requests to the server even when you use the JDBC batching API, so the driver sent thousands of single INSERT/UPDATE statements instead of one. We solved the problem with the MySQL JDBC driver's rewriteBatchedStatements=true option.
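As a rough illustration (this is not the driver's actual code), rewriteBatchedStatements=true makes the driver rewrite N single-row INSERTs queued via addBatch() into one multi-row INSERT, so the server sees a single round trip instead of N:

```java
import java.util.List;
import java.util.StringJoiner;

public class BatchRewrite {
    // Simulates the rewrite: one INSERT with a VALUES row per batched value.
    // Table and column names here are hypothetical examples.
    static String rewrite(String table, String column, List<String> values) {
        StringJoiner rows = new StringJoiner(", ");
        for (String v : values) {
            rows.add("('" + v + "')");
        }
        return "INSERT INTO " + table + " (" + column + ") VALUES " + rows;
    }

    public static void main(String[] args) {
        // Without the flag, the driver sends three separate INSERT statements;
        // with it, something equivalent to this single statement:
        System.out.println(rewrite("users", "name", List.of("a", "b", "c")));
        // INSERT INTO users (name) VALUES ('a'), ('b'), ('c')
    }
}
```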
Thanks for your suggestion @Carrotman42, but unfortunately PHP does not support connection pooling unless you use the ODBC driver, which I suspect is even less efficient.
The hourly spike is my primary problem, as it triggers my monitoring alerts. These details might help: the spike happens for all proxy containers every hour, blocking one or more queries for ~1 second. The query is not caught by the MySQL slow log.
Note: the comparison might not be entirely fair, as I was using a non-encrypted direct connection, so the general overhead could be caused by the encryption. Removing the first query from the result makes the difference smaller: 22 ms vs 31 ms for 29 simple SELECT queries.
Did you do a warm-up before starting your benchmark? There is some overhead when the first connection is opened; it would be good to know whether the latency is attributable to that. Do you have other percentiles, like the 95th?
P.S. there's a native java library that doesn't require installing the go proxy: https://github.com/GoogleCloudPlatform/cloud-sql-mysql-socket-factory
@Laixer
Yes, I warmed up the setup before the test. It was not a load test; I was sending only 2 requests per second. Network latency is not included in these metrics, as they were captured on the server side.
I deleted the Kubernetes cluster after the test, so I don't have the exact 95th-percentile figures, but they were close to the 99th percentile (about 10 ms lower).
I am getting an "Insufficient Permission" error when using cloud-sql-mysql-socket-factory on a Kubernetes cluster. As far as I understand, this is because the library uses Application Default Credentials, which the Kubernetes cluster does not provide.
@rigalrock Application Default Credentials support reading credentials from a file [1]. Assuming you already have a secret mounted with the credentials, you can point GOOGLE_APPLICATION_CREDENTIALS at that file and the library should work.
[1] https://developers.google.com/identity/protocols/application-default-credentials#howtheywork
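A quick sanity check of this first step of the ADC lookup (is the variable set, and does it point at a readable file?) can be sketched like this. The path-checking helper is illustrative, not part of any Google library:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class AdcCheck {
    // Returns true when the env value is set and points at a readable file,
    // mirroring the first thing ADC does with GOOGLE_APPLICATION_CREDENTIALS.
    static boolean credentialsFileAvailable(String envValue) {
        return envValue != null && Files.isReadable(Path.of(envValue));
    }

    public static void main(String[] args) {
        String path = System.getenv("GOOGLE_APPLICATION_CREDENTIALS");
        System.out.println("ADC key file available: " + credentialsFileAvailable(path));
    }
}
```

In the Kubernetes setup above, this check passing inside the pod confirms the secret was mounted where the env variable says it is.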
Closing due to lack of updates. Please feel free to reopen the issue if there are any other questions. I believe the summary here is: the proxy has high new-connection latency, but with connection pooling the latency should not be significantly higher than with native MySQL SSL connectivity.
We are writing from a Java application running in Kubernetes to Cloud SQL. Both the Kubernetes cluster and the Cloud SQL instance run in the same region (europe-west1-b). Our application uses the Cloud SQL Proxy, which runs alongside our pod.
Unfortunately, write access (reads not measured yet) is around 20 times slower than on my local machine, which has a comparable setup but without Kubernetes (a Java process talking to a local MySQL on SSD). We use connection pooling, so the problem does not seem to be related to creating new connections.
The amount of data sent is around 1.5 MB, and if I just import the SQL file into Cloud SQL using my tool of choice (which connects without the proxy), it is very fast.
This is an excerpt of our Java application's Kubernetes YAML file:

```yaml
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
  name: cloudsql-proxy
  command: ["/cloud_sql_proxy", "--dir=/cloudsql",
            "-instances=xxx:europe-west1:xxx=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
    - name: ssl-certs
      mountPath: /etc/ssl/certs
    - name: cloudsql
      mountPath: /cloudsql
volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials
  - name: ssl-certs
    hostPath:
      path: /etc/ssl/certs
  - name: cloudsql
    emptyDir: {}
```
We also use this service account configuration file:

```json
{
  "type": "service_account",
  "project_id": "xxx",
  "private_key_id": "xxx",
  "private_key": "-----BEGIN PRIVATE KEY-----xxxxxxxxx-----END PRIVATE KEY-----\n",
  "client_email": "[email protected]",
  "client_id": "xxx",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/xxx%40xxx.iam.gserviceaccount.com"
}
```
Have you tried the native Java library?
https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory
No, I have not tried that yet. I will give it a try and report back, but it may take some time (~2-3 weeks).
@prismec Did you find a solution? I'm having a similar issue and I'm quite confused. It connects, but after some time it takes a while to reconnect.
@boogie4eva Can you please share the YAML files and the connection strings for our reference?
@vamsipkris will do once I get to my system
@prismec, I'm curious what the bottleneck is that results in the limited write throughput; is the proxy client using significant CPU? The main difference I can think of between the Java socket factory and the proxy is the different SSL libraries and the communication overhead of the socket...
@hfwang The proxy uses a sidecar pattern, and the response time through it can be quite high. In my case, queries sometimes time out and it takes a while to reconnect. I read this is because the proxy has to re-authenticate with Cloud SQL at intervals, which can be a pain for your user experience. With the socket factory we don't have these issues.
@Carrotman42 There is a huge difference in my personal observation. As I stated earlier, for me the proxy option causes delays in response times.
Thanks for that update!