Cloud Spanner is the world's first fully managed relational database service
to offer both strong consistency and horizontal scalability for
mission-critical online transaction processing (OLTP) applications. With Cloud
Spanner you enjoy all the traditional benefits of a relational database, but
unlike any other relational database service, Cloud Spanner scales horizontally
to hundreds or thousands of servers to handle the biggest transactional
workloads.
Install this library in a virtualenv using pip. virtualenv is a tool to
create isolated Python environments. The basic problem it addresses is one of
dependencies and versions, and indirectly permissions.
With virtualenv, it's possible to install this library without needing system
install permissions, and without clashing with the installed system
dependencies.
Generally, to work with Cloud Spanner, you will want a transaction. The
preferred mechanism for this is to create a single function, which executes
as a callback to database.run_in_transaction:
# First, define the function that represents a single "unit of work"
# that should be run within the transaction.
def update_anniversary(transaction, person_id, unix_timestamp):
    # The query itself is just a string.
    #
    # The use of @parameters is recommended rather than doing your
    # own string interpolation; this provides protections against
    # SQL injection attacks.
    query = """SELECT anniversary FROM people WHERE id = @person_id"""

    # When executing the SQL statement, the query and parameters are sent
    # as separate arguments. When using parameters, you must specify
    # both the parameters themselves and their types.
    row = transaction.execute_sql(
        query=query,
        params={'person_id': person_id},
        param_types={
            'person_id': types.INT64_PARAM_TYPE,
        },
    ).one()

    # Now perform an update on the data.
    old_anniversary = row[0]
    new_anniversary = _compute_anniversary(old_anniversary, unix_timestamp)
    transaction.update(
        'people',
        ['person_id', 'anniversary'],
        [person_id, new_anniversary],
    )

# Actually run the `update_anniversary` function in a transaction.
database.run_in_transaction(
    update_anniversary,
    person_id=42,
    unix_timestamp=1335020400,
)
Select records using a Transaction
Once you have a transaction object (such as the first argument sent to
run_in_transaction), reading data is easy:
# Define a SELECT query.
query = """
    SELECT e.first_name, e.last_name, p.telephone
    FROM employees AS e, phones AS p
    WHERE p.employee_id = e.employee_id
"""

# Execute the query and return results.
result = transaction.execute_sql(query)
for row in result.rows:
    print(row)
Insert records using Data Manipulation Language (DML) with a Transaction
Use the execute_update() method to execute a DML statement:
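A sketch of what this looks like inside run_in_transaction; the table and column names here are illustrative, and `database` is assumed to be a google.cloud.spanner Database:

```python
# Run a DML statement with execute_update() inside run_in_transaction;
# the table and column names are illustrative.
def insert_person(transaction):
    # execute_update() runs the DML statement and returns the number
    # of rows affected.
    row_ct = transaction.execute_update(
        "INSERT INTO people (id, first_name) VALUES (1, 'Alice')"
    )
    print("{} record(s) inserted.".format(row_ct))
    return row_ct

# database.run_in_transaction(insert_person)
```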
The Connection API is a wrapper around the Python Spanner API, written in accordance with PEP 249 (the Python Database API Specification), and provides a simple way to communicate with a Spanner database through connection objects:
from google.cloud.spanner_dbapi.connection import connect

connection = connect("instance-id", "database-id")
connection.autocommit = True

cursor = connection.cursor()
cursor.execute("SELECT * FROM table_name")
result = cursor.fetchall()
Aborted Transactions Retry Mechanism
When autocommit is disabled, transactions can be aborted due to transient errors. In most cases, retrying an aborted transaction solves the problem. To simplify this, the connection tracks the SQL statements executed in the current transaction. If the transaction is aborted, the connection initiates a new one and re-executes all the statements. In the process, the connection checks that the retried statements return the same results as the original statements did. If the results differ, the transaction is dropped, as the underlying data changed and an automatic retry is impossible.
Auto-retry of aborted transactions is enabled only when autocommit is disabled, as transactions are never aborted in autocommit mode.
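For example, with autocommit disabled, a group of statements runs in one transaction that the connection can auto-retry if it is aborted. This is a sketch: the insert_row helper and the table name are mine, and `connection` is a PEP 249 connection like the one above:

```python
# Sketch of non-autocommit usage; insert_row and the table name are
# illustrative, connection is a PEP 249 connection object.
def insert_row(connection, row_id):
    # Disable autocommit so the statements below share one transaction.
    connection.autocommit = False
    cursor = connection.cursor()
    cursor.execute(
        "INSERT INTO table_name (id) VALUES (%d)" % row_id
    )
    # If the transaction is aborted before this commit succeeds, the
    # connection replays the tracked statements and retries.
    connection.commit()
```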
test_list_backups is failing because a backup from a different test is included in the returned list when filtering on size_bytes.
The first problem can be resolved by increasing the timeout for UpdateBackup.
The second problem is difficult to replicate, and the exact cause is unclear, given that the tests are not run in parallel and the backups are deleted at the end of each test. The simplest solution would be to modify the test to ensure that no backups from previous tests meet the condition.
state = <grpc._channel._RPCState object at 0x7fa82980acf8>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fa829a5a508>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.INVALID_ARGUMENT
E details = "Invalid ListBackupOperations request."
E debug_error_string = "{"created":"@1602062355.673804195","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Invalid ListBackupOperations request.","grpc_status":3}"
E >
backup_sample.py:133: in list_backup_operations
for op in operations:
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/page_iterator.py:212: in _items_iter
for page in self._page_iter(increment=False):
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/page_iterator.py:243: in _page_iter
page = self._next_page()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/page_iterator.py:534: in _next_page
response = self._method(self._request)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
on_error=on_error,
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:184: in retry_target
return target()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid ListBackupOp...c/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Invalid ListBackupOperations request.","grpc_status":3}"
???
E google.api_core.exceptions.InvalidArgument: 400 Invalid ListBackupOperations request.
By chance, I noticed broken issue URLs in CHANGELOG.md (only in the 1.14.0 release). For example:
This one leads to python-spanner/issues/10183, but the original issue is actually google-cloud-python/pull/10183.
@larkee, PTAL. It's not a problem to fix a couple of links, but I assume this was done by a release tool or some kind of script, so it could repeat in the future.
This is an issue that has plagued me for a while but I just got the time to make a repro.
Basically, if I invoke Transaction.execute_sql and do NOT consume the result, e.g.
txn.execute_sql('DELETE from T1 WHERE 1=1')
instead of
res=txn.execute_sql('DELETE from T1 WHERE 1=1')
_=list(res)
then the table will NOT be purged.
This seems like a bug to me in the underlying gRPC library, but it would be useful to explicitly document/call out this behavior if we don't have the bandwidth to fix it, to avoid unexpected problems for customers. It has definitely cost me some hours in the past, and again just now.
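Until this is fixed or documented, one workaround is a tiny helper (the name is mine) that always drains the streamed result so the statement actually takes effect:

```python
def execute_and_drain(transaction, sql):
    """Execute SQL and fully consume the lazily streamed result.

    execute_sql() returns a streamed result set; the statement is not
    fully processed until the iterator is consumed.
    """
    result = transaction.execute_sql(sql)
    return list(result)
```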
Add support for the three new client options: quota_project_id, scopes, and credentials_file. These options are implemented in the microgenerator, so completing the migration may automatically fulfill this.
I am currently dealing with a situation where a Transaction might have been rolled back, but the exception wasn't directly passed back to me, as per
======================================================================
ERROR: test_concurrent_delete_with_save (basic.tests.ConcurrentSaveTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_channel.py", line 565, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Cannot start a read or query within a transaction after Commit() or Rollback() has been called."
debug_error_string = "{"created":"@1580864794.999511000","description":"Error received from peer ipv6:[2607:f8b0:4007:803::200a]:443","file":"src/core/lib/surface/call.cc","file_line":1046,"grpc_message":"Cannot start a read or query within a transaction after Commit() or Rollback() has been called.","grpc_status":9}">
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/spanner/dbapi/cursor.py", line 89, in execute
self.__handle_insert(self.__get_txn(), sql, args or None)
File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/spanner/dbapi/cursor.py", line 139, in __handle_insert
param_types=param_types,
File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/spanner/dbapi/cursor.py", line 356, in handle_txn_exec_with_retry
return txn_method(*args, **kwargs)
File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/google/cloud/spanner_v1/transaction.py", line 202, in execute_update
metadata=metadata,
File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 810, in execute_sql
request, retry=retry, timeout=timeout, metadata=metadata
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/retry.py", line 277, in retry_wrapped_func
on_error=on_error,
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/retry.py", line 182, in retry_target
return target()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.FailedPrecondition: 400 Cannot start a read or query within a transaction after Commit() or Rollback() has been called.
and I see that we exported the attribute committed as per
OS type and version: standard CircleCI docker image circleci/python:3.6.1, running on Linux 558edcd72a3d 4.15.0-1052-aws #54-Ubuntu SMP Tue Oct 1 15:43:26 UTC 2019 x86_64 Linux.
Python version: 3.6.1
Using google-cloud-spanner library 1.13.0.
This sampledb integration test creates a new database, with a name including the current time down to second resolution.
The test is not invoked in parallel, so this database creation should never fail due to an already existing database of the same name. However, this error did occur, as the log below shows -- maybe that's a bug in the retry implementation in the library?
#!/bin/bash -eo pipefail
. venv/bin/activate
pytest
============================= test session starts ==============================
platform linux -- Python 3.6.1, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
rootdir: /home/circleci/repo
collected 1 item
batch_import_test.py F [100%]
=================================== FAILURES ===================================
______________________________ test_batch_import _______________________________
args = (parent: "projects/cloudspannerecosystem/instances/***************************"
create_statement: "CREATE DATABASE `sa...ore, url)"
extra_statements: "\n\nCREATE INDEX StoriesByTitleTimeScore ON stories(title) STORING (time_ts, score)\n"
,)
kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/cloudspannerecosystem/instances/***************************/d...ion-test'), ('x-goog-api-client', 'gl-python/3.6.1 grpc/1.26.0 gax/1.15.0 gapic/1.13.0 gccl/1.13.0')], 'timeout': 60.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
venv/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:57:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fbb3527c4a8>
request = parent: "projects/cloudspannerecosystem/instances/***************************"
create_statement: "CREATE DATABASE `sam...score, url)"
extra_statements: "\n\nCREATE INDEX StoriesByTitleTimeScore ON stories(title) STORING (time_ts, score)\n"
timeout = 60.0
metadata = [('google-cloud-resource-prefix', 'projects/cloudspannerecosystem/instances/***************************/databases/samp...ontinuous-integration-test'), ('x-goog-api-client', 'gl-python/3.6.1 grpc/1.26.0 gax/1.15.0 gapic/1.13.0 gccl/1.13.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
venv/lib/python3.6/site-packages/grpc/_channel.py:824:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7fbb352144a8>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fbb35210088>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Database already exists: projects/cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24"
E debug_error_string = "{"created":"@1579392565.335114093","description":"Error received from peer ipv4:172.217.13.74:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Database already exists: projects/cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24","grpc_status":6}"
E >
venv/lib/python3.6/site-packages/grpc/_channel.py:726: _InactiveRpcError
The above exception was the direct cause of the following exception:
def test_batch_import():
instance_id = os.environ['SPANNER_INSTANCE']
# Append the current timestamp to the database name.
now_str = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
database_id = 'sampledb_%s' % now_str
> batch_import.main(instance_id, database_id)
batch_import_test.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
batch_import.py:74: in main
database.create()
venv/lib/python3.6/site-packages/google/cloud/spanner_v1/database.py:221: in create
metadata=metadata,
venv/lib/python3.6/site-packages/google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py:424: in create_database
request, retry=retry, timeout=timeout, metadata=metadata
venv/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:143: in __call__
return wrapped_func(*args, **kwargs)
venv/lib/python3.6/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
on_error=on_error,
venv/lib/python3.6/site-packages/google/api_core/retry.py:184: in retry_target
return target()
venv/lib/python3.6/site-packages/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
venv/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.ALREADY_EXISTS
details = "Database already exist...cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24","grpc_status":6}"
>
> ???
E google.api_core.exceptions.AlreadyExists: 409 Database already exists: projects/cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24
<string>:3: AlreadyExists
============================== 1 failed in 1.17s ===============================
Exited with code exit status 1
It might be relevant that the implementation currently doesn't wait for the future returned by the database creation, which this PR will fix. So potentially that could lead the retry logic to issue another creation operation?
In the logs for the Cloud Spanner instance there is only a single error listed, for google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase.
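The fix described above amounts to blocking on the long-running operation instead of returning immediately. A sketch, where create_and_wait is my name and `database` is assumed to be a google.cloud.spanner Database whose create() returns an operation future:

```python
# Block on the long-running CreateDatabase operation instead of
# returning immediately; create_and_wait is an illustrative name.
def create_and_wait(database, timeout=120):
    operation = database.create()
    # result() blocks until the operation completes, raising the
    # operation's error instead of leaving it pending.
    return operation.result(timeout=timeout)
```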
state = <grpc._channel._RPCState object at 0x7fa8776fdd30>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fa877630788>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "Cannot create database projects/python-docs-samples-tests/instances/test-instance-ea2482b380/databases/test-db-652111f0ff from backup projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e because the backup is still being created. Please retry the operation once the pending backup is complete."
E debug_error_string = "{"created":"@1606386594.850508711","description":"Error received from peer ipv4:74.125.197.95:443","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Cannot create database projects/python-docs-samples-tests/instances/test-instance-ea2482b380/databases/test-db-652111f0ff from backup projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e because the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"
E >
backup_sample.py:69: in restore_database
operation = new_database.restore(backup)
../../google/cloud/spanner_v1/database.py:551: in restore
metadata=metadata,
../../google/cloud/spanner_admin_database_v1/services/database_admin/client.py:1835: in restore_database
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Cannot create dat...the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"
???
E google.api_core.exceptions.FailedPrecondition: 400 Cannot create database projects/python-docs-samples-tests/instances/test-instance-ea2482b380/databases/test-db-652111f0ff from backup projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e because the backup is still being created. Please retry the operation once the pending backup is complete.
Given spanner_v1 version 1.11.0, I am obtaining a transaction from a PingingPool, as per:
# Create a session pool that'll periodically refresh every 3 minutes
# (arbitrary choice of value).
pool = spanner.PingingPool(size=10, default_timeout=5, ping_interval=180)
background_thread = threading.Thread(target=pool.ping, name='ping-pool')
background_thread.daemon = True
background_thread.start()

db = client_instance.database(database, pool=pool)
if not db.exists():
    raise ProgrammingError("database '%s' does not exist." % database)
sess = db.session()
...

# Then later obtaining a transaction and holding it for a long-ish time
txn = sess.transaction()
txn.begin()
# Do a bunch of operations with the transaction
...
txn.commit()
and I can confirm that pool isn't being used concurrently, but I've seen a test failure with
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.ABORTED
details = "Transaction not found"
debug_error_string = "{"created":"@1580854844.873538358","description":"Error received from peer ipv4:172.217.204.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Transaction not found","grpc_status":10}"
and in full detail
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.ABORTED
details = "Transaction not found"
debug_error_string = "{"created":"@1580854844.873538358","description":"Error received from peer ipv4:172.217.204.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Transaction not found","grpc_status":10}">
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "runtests.py", line 507, in <module>
options.exclude_tags,
File "runtests.py", line 294, in django_tests
extra_tests=extra_tests,
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 629, in run_tests
old_config = self.setup_databases(aliases=databases)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 554, in setup_databases
self.parallel, **kwargs
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/utils.py", line 174, in setup_databases
serialize=connection.settings_dict.get('TEST', {}).get('SERIALIZE', True),
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/django/creation.py", line 33, in create_test_db
super().create_test_db(*args, **kwargs)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/creation.py", line 72, in create_test_db
run_syncdb=True,
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/__init__.py", line 148, in call_command
return command.execute(*args, **defaults)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/base.py", line 83, in wrapped
res = handle_func(*args, **kwargs)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/commands/migrate.py", line 257, in handle
self.verbosity, self.interactive, connection.alias, apps=post_migrate_apps, plan=plan,
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/sql.py", line 51, in emit_post_migrate_signal
**kwargs
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/dispatch/dispatcher.py", line 175, in send
for receiver in self._live_receivers(sender)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/dispatch/dispatcher.py", line 175, in <listcomp>
for receiver in self._live_receivers(sender)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/management/__init__.py", line 83, in create_permissions
Permission.objects.using(using).bulk_create(perms)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 468, in bulk_create
self._batched_insert(objs_with_pk, fields, batch_size, ignore_conflicts=ignore_conflicts)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 1211, in _batched_insert
self._insert(item, fields=fields, using=self.db, ignore_conflicts=ignore_conflicts)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 1186, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/sql/compiler.py", line 1368, in execute_sql
cursor.execute(sql, params)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 87, in execute
self.__handle_insert(self.__get_txn(), sql, args or None)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 128, in __handle_insert
res = txn.execute_update(sql, params=params, param_types=param_types)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/transaction.py", line 202, in execute_update
metadata=metadata,
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 810, in execute_sql
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.Aborted: 409 Transaction not found
Can you add a timeout param and pass it to the API call, so that I can control the deadline? We occasionally see DeadlineExceededError on Backup.create() calls.
DEBUG:google.auth.transport.requests:Making request: POST https://oauth2.googleapis.com/token
DEBUG:urllib3.connectionpool:https://oauth2.googleapis.com:443 "POST /token HTTP/1.1" 400 None
ERROR:grpc._plugin_wrapping:AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x10d79d4a8>" raised exception!
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/grpc/_plugin_wrapping.py", line 79, in __call__
callback_state, callback))
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 77, in __call__
callback(self._get_authorization_headers(context), None)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 65, in _get_authorization_headers
headers)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/credentials.py", line 122, in before_request
self.refresh(request)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/service_account.py", line 322, in refresh
request, self._token_uri, assertion)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/_client.py", line 145, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/_client.py", line 111, in _token_endpoint_request
_handle_error_response(response_body)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/_client.py", line 61, in _handle_error_response
error_details, response_body)
google.auth.exceptions.RefreshError: ('invalid_grant: Not a valid email or user ID.', '{\n "error": "invalid_grant",\n "error_description": "Not a valid email or user ID."\n}')
DEBUG:google.api_core.retry:Retrying due to 503 Getting metadata from plugin failed with error: ('invalid_grant: Not a valid email or user ID.', '{\n "error": "invalid_grant",\n "error_description": "Not a valid email or user ID."\n}'), sleeping 1.3s ...
This is an error that will never resolve. We should surface it to the user immediately.
Also: I have no idea why this is a 503 UNAVAILABLE. Why would it not be a 400 BAD REQUEST or 401 UNAUTHORIZED??
Coming here from a project that plans on adding Cloud Spanner as a backend for Django.
In AUTOCOMMIT=off mode, we need to hold a Transaction for a potentially indefinitely long time.
Cloud Spanner will abort:
a) Transactions that have not been used for 10 seconds or more -- we can periodically send a SELECT 1=1 to keep them active
b) Transactions even when refreshed; they can and will abort, because Cloud Spanner has a high abort rate
Thus we need to retry Transactions!
Current retry
The current code for retrying in this repository just re-invokes the function that was passed into *.run_in_transaction afresh with a new Transaction, as per
However, the correct way to retry Transactions, as @bvandiver explained to me:
You are getting quite close to the implementation in the open source JDBC driver. Rather than re-inventing things, I would suggest following their implementation. Of note, your current replay mechanism can lead to wrong answers. Imagine the canonical "transfer balance" transaction which decrements the balance in acct A, then increases the balance in acct B. However, between abort and retry someone deletes acct A - resulting in money magically appearing in acct B and no error (the update silently fails to update any rows). The long and the short of it is that you need to hash the results of all queries + DML and confirm on your retry that they give the same answers. You need query too (think a query to check if there was sufficient balance in acct A).
a) For every result returned by an operation on a Transaction, compute its checksum and add it to a FIFO stack
b) At the point that a prior Transaction fails, that's the bottom of our stack
c) When retrying the Transaction from the first statement, compare its checksum with the same ordinal number/index on the FIFO stack -- if any of them don't match, abort the Transaction as not retryable
The implementation of this feature, when attempted outside of this package, involves a whole lot of hacking, since we need to consume the raw data sent to StreamedResultSets, which then requires proto marshalling and wrapping StreamedResultSet -- quite non-ideal, and it will actually require patches to python-spanner.
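The FIFO checksum scheme described in steps a)-c) can be sketched in plain Python. The class name and the pickle-based serialization are illustrative choices, not part of python-spanner:

```python
import hashlib
import pickle


class ResultChecksummer:
    """Sketch of the checksum-based retry check described above."""

    def __init__(self):
        self._checksums = []    # FIFO of checksums from the first attempt
        self._replay_index = 0  # position while replaying on retry

    @staticmethod
    def _checksum(rows):
        # Hash a deterministic serialization of the result rows.
        return hashlib.sha256(pickle.dumps(rows)).hexdigest()

    def record(self, rows):
        """Record the checksum of a statement's results on the first attempt."""
        self._checksums.append(self._checksum(rows))

    def verify(self, rows):
        """On retry, compare the new results against the recorded checksum
        at the same ordinal position. Returns False if the data changed,
        in which case the transaction must not be silently replayed."""
        if self._replay_index >= len(self._checksums):
            return True  # beyond the point the first attempt reached
        ok = self._checksums[self._replay_index] == self._checksum(rows)
        self._replay_index += 1
        return ok
```

A retry loop would call record() on the first attempt, then verify() for each replayed statement, aborting as non-retryable on the first mismatch.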
@bvandiver and I chatted again about this today and I also briefly raised this issue to @skuruppu this afternoon too.
"/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/commands.py", line 272, in build_extensions
"Failed `build_ext` step:\n{}".format(formatted_exception))
commands.CommandError: Failed `build_ext` step:
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 118, in _compile
extra_postargs)
File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn
_classic_spawn(self, command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/commands.py", line 267, in build_extensions
build_ext.build_ext.build_extensions(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
_build_ext.build_extension(self, ext)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 59, in _parallel_compile
_compile_single_file, objects)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 54, in _compile_single_file
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command 'gcc' failed with exit status 1
----------------------------------------
Command "/tmpfs/src/github/synthtool/env/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmpfs/tmp/pip-install-m7g1ywho/grpcio/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmpfs/tmp/pip-record-_f7n9x8q/install-record.txt --single-version-externally-managed --compile --install-headers /tmpfs/src/github/synthtool/env/include/site/python3.6/grpcio" failed with error code 1 in /tmpfs/tmp/pip-install-m7g1ywho/grpcio/
You are using pip version 18.1, however version 20.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/python-spanner/synth.py", line 86, in <module>
python.py_samples()
File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 132, in py_samples
sample_readme_metadata = _get_sample_readme_metadata(sample_project_dir)
File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 85, in _get_sample_readme_metadata
shell.run([sys.executable, "-m", "pip", "install", "-r", requirements])
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'pip', 'install', '-r', '/home/kbuilder/.cache/synthtool/python-spanner/samples/samples/requirements.txt']' returned non-zero exit status 1.
2020-12-03 06:26:25,656 autosynth [ERROR] > Synthesis failed
2020-12-03 06:26:25,656 autosynth [DEBUG] > Running: git reset --hard HEAD
HEAD is now at cf87cdf chore: release 2.1.0 (#173)
2020-12-03 06:26:25,675 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2020-12-03 06:26:25,688 autosynth [DEBUG] > Running: git clean -fdx
Removing .pre-commit-config.yaml
Removing __pycache__/
Removing google/__pycache__/
Removing google/cloud/__pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 334, in _inner_main
commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 65, in synthesize_loop
has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest)
File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch
synthesizer.synthesize(synth_log_path, self.environ)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.
There is a major bug in the ping() function used by both of these pools. The function breaks out of the while True: loop when the pool is empty or a session does not need to be pinged yet. This means it is unsuitable for use as a background thread, as we suggest, because the loop is likely to end the first time it is run.
Additionally, TransactionPingingPool puts used sessions into a pending-sessions queue so that transactions can be started on them. However, the begin_pending_transactions() function that removes them only runs once when the pool is created and once when the pool is bound to a database. The condition for the function's loop is: while not self._pending_session.empty():
which means that if at any point there are no pending sessions, any future pending sessions will not be refreshed. There is no documentation suggesting that a user needs to run this themselves, which makes this another major bug.
While studying the StreamedResultSet() class, it came to my attention that it includes a _counter attribute which is never actually used (not even in unit tests). As it's protected, it seems to be intended for the object's internal use, not for users (users can probably just take len() or count results themselves if they need to).
Cloud Spanner supports adding labels to resources such as instances which can be used for filtering. Currently, the Python library does not allow users to set or get the labels through the provided surface. Adding this support would allow instances created for running system tests to be labelled. This would allow instances from previous system test runs which were not deleted to be cleaned up as part of the testing setup.
[18:13:16][ERROR] Failed to get build config
com.google.devtools.kokoro.config.ConfigException: Couldn't find build configuration file docs-presubmit.cfg or docs-presubmit.gcl under /tmp/workspace/workspace/cloud-devrel/client-libraries/python/googleapis/python-spanner/docs/docs-presubmit/src/github/python-spanner/.kokoro/docs.
at com.google.devtools.kokoro.config.BuildConfigReader.lambda$read$2(BuildConfigReader.java:54)
at java.util.Optional.orElseThrow(Optional.java:290)
at com.google.devtools.kokoro.config.BuildConfigReader.read(BuildConfigReader.java:51)
at com.google.devtools.kokoro.jenkins.plugin.kokorojob.store.NodeBuildConfigReader.invoke(NodeBuildConfigReader.java:39)
at com.google.devtools.kokoro.jenkins.plugin.kokorojob.store.NodeBuildConfigReader.invoke(NodeBuildConfigReader.java:13)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2731)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:336)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.jenkinsci.remoting.kokoro.RpcSlaveEngine$1$1.run(RpcSlaveEngine.java:107)
at java.lang.Thread.run(Thread.java:748)
at ......remote call to gcp_ubuntu-prod-yoshi-ubuntu-ir-819542672(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1537)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:822)
at hudson.FilePath.act(FilePath.java:985)
at hudson.FilePath.act(FilePath.java:974)
at com.google.devtools.kokoro.jenkins.plugin.kokorojob.store.ConfigStore.getKokoroBuildConfig(ConfigStore.java:102)
at com.google.devtools.kokoro.jenkins.plugin.pipeline.KokoroFlowExecution.getBuildConfig(KokoroFlowExecution.java:661)
at com.google.devtools.kokoro.jenkins.plugin.pipeline.KokoroFlowExecution.addPostScmSteps(KokoroFlowExecution.java:608)
at com.google.devtools.kokoro.jenkins.plugin.pipeline.KokoroScmStepContext.onSuccess(KokoroScmStepContext.java:25)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
because instance is the protobuf type google.cloud.spanner_admin_instance_v1.types.spanner_instance_admin.Instance instead of google.cloud.spanner_v1.instance.Instance. This is causing the following error in CI:
During handling of the above exception, another exception occurred:
def setUpModule():
if USE_EMULATOR:
from google.auth.credentials import AnonymousCredentials
emulator_project = os.getenv("GCLOUD_PROJECT", "emulator-test-project")
Config.CLIENT = Client(
project=emulator_project, credentials=AnonymousCredentials()
)
else:
Config.CLIENT = Client()
retry = RetryErrors(exceptions.ServiceUnavailable)
configs = list(retry(Config.CLIENT.list_instance_configs)())
instances = retry(_list_instances)()
EXISTING_INSTANCES[:] = instances
# Delete test instances that are older than an hour.
cutoff = int(time.time()) - 1 * 60 * 60
for instance in Config.CLIENT.list_instances("labels.python-spanner-systests:true"):
if "created" not in instance.labels:
continue
create_time = int(instance.labels["created"])
if create_time > cutoff:
continue
# Instance cannot be deleted while backups exist.
> for backup in instance.list_backups():
tests/system/test_system.py:125:
This is an experience report coming from a use case for which this API client had never been considered. I am working on the spanner-django ORM plugin. My use case requires me to hold a Transaction alive for a few seconds as it is used across various functions.
Problem statement
The design of this API client assumes that folks will all invoke database.run_in_transaction to run a bunch of code within one function, and that database.run_in_transaction will handle retries, context checkouts, and the session that creates the Transaction.
In #10 (comment), @larkee pointed out to me that my usage of spanner_v1.Database.session() doesn't create a session from the pool that I might have provided! That came as a huge surprise to me and could explain a bunch of random errors I was getting from Spanner's server with NOT FOUND Session.
To even make this work, I had to fumble around, read through the implementation details, and access private methods, i.e.:
global_session_pool = spanner.pool.BurstyPool()

def connect(...):
    # Correctly retrieve a session from the global session pool.
    # See:
    # * https://github.com/orijtech/django-spanner/issues/291
    # * https://github.com/googleapis/python-spanner/issues/10#issuecomment-585056760
    #
    # Adapted from:
    # https://bit.ly/3c8MK6p: python-spanner, Git hash 997a03477b07ec39c7184
    # google/cloud/spanner_v1/pool.py#L514-L535
    # TODO: File a bug to googleapis/python-spanner asking for a convenience
    # method, since invoking database.session() gives the wrong result
    # yet requires a context manager wrapped with SessionCheckout
    # and needs accessing private methods, which leaks the details of the
    # implementation in order to try to use it correctly.
    pool = db._pool
    session_checkout = spanner.pool.SessionCheckout(pool)
    session = session_checkout.__enter__()
    if not session.exists():
        session.create()
    return_session = lambda: session_checkout.__exit__()  # noqa
    return Connection(db, session, return_session)
Suggestion
The presence of spanner_v1.Database.session() as a public method that totally bypasses the pool the user passed in is a surprise, and easily creates misuse that's very subtle to catch.
I think we can make this a whole lot easier to use, without misuse, by perhaps adding a checkout_session() convenience method,
where checkout_session() will handle the logic of the SessionCheckout,
and then finally deprecating spanner_v1.Database.session(), which oddly requires the caller to first check whether the session exists and, at the end, also invoke session.delete().
The suggestion above will remove all that cognitive load.
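The suggestion could look something like the following context manager. `checkout_session` is hypothetical (it does not exist in python-spanner today), and the `_pool` attribute access mirrors the current private implementation:

```python
from contextlib import contextmanager


@contextmanager
def checkout_session(database):
    """Yield a session from the database's own pool, returning it on exit.

    Sketch of the proposed convenience API; not part of python-spanner.
    """
    pool = database._pool  # today this attribute is private
    session = pool.get()
    try:
        if not session.exists():
            session.create()
        yield session
    finally:
        pool.put(session)
```

Usage would then be simply `with checkout_session(db) as session: ...`, with no private attributes or manual `__enter__`/`__exit__` calls.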
We have a query returning 80,000 rows with 71 fields in the select list, using Python 3 and Google Cloud Spanner API 1.17 (we tried 1.18, 1.19, and 2.x). I chose version 1.17 because performance decreases with newer versions of the API.
The query itself returns in up to 0.9 ms; the cost appears when I start to copy the rows from the StreamedResultSet iterator into a list.
I started to isolate the code, and I'm using an empty for loop to simulate the problem and rule out any other application performance issue.
Code Snippet:
import time

from google.cloud.spanner import Client

"""Queries sample data from the database using SQL."""
projectId = 'cerc2-datalake-int-01'
instanceId = 'datalake-int-spanner-01'
database_id = 'cerc_datalake_bk_int14'
spanner_client = Client(projectId)
instance = spanner_client.instance(instanceId)
database = instance.database(database_id)

# Read query from query.sql file.
with open('query.sql', mode='r') as file:
    content = file.read()

# Executing the query
start = time.time()
print('Starting query')
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(content)
end = time.time()
print(f'Query execution time: {end - start}')

# Count returned rows (USING VERSION 2.x: error in this piece of code, a
# timestamp conversion error inside google-api-core -- removed all
# datetime fields to test)
start = time.time()
i = 0
for row in results:
    i += 1
end = time.time()
print(f'Iterate items: {end - start}')
Using the cProfile library to profile the application, I realized that the call to the _parse_value_pb method, inside _merge_values in streamed.py, is the slowest part of my application.
For testing purposes, I removed line 106 of the streamed.py file:
The first scenario (with _parse_value_pb):
Query 0.9ms, resultset iteration: 25 seconds;
The second scenario, removing _parse_value_pb:
Query 0.9ms, resultset iteration: 6 seconds;
All these tests are running on my 2.4 GHz laptop, but when we use App Engine, this routine takes more than 120 seconds.
I've also tested with version 2.1 of the Google Spanner library and got 103 seconds instead of 25 seconds (version 1.17), with the same behavior.
The performance is fine when the result set has many rows but few columns in each row. I tested with up to 30 result-set columns, and at that width this behavior is not a problem. In my case, I need to work with 71 columns in each row.
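The measurement above can be reproduced with a small cProfile harness. `profile_iteration` is just an illustrative helper that drains any iterable (e.g. a StreamedResultSet) under the profiler and returns a report of the hottest functions:

```python
import cProfile
import io
import pstats


def profile_iteration(results):
    """Drain a result iterator under cProfile and return a stats report."""
    profiler = cProfile.Profile()
    profiler.enable()
    for _ in results:
        pass  # empty loop, as in the isolation test above
    profiler.disable()
    out = io.StringIO()
    # Show the ten most expensive functions by cumulative time.
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
    return out.getvalue()
```

Run against a real StreamedResultSet, the report should show where time is spent (e.g. in _parse_value_pb, per the report above).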
Today we have a default timeout of 60 seconds on query operations (ExecuteSql, ExecuteStreamingSql, Read, StreamingRead). We have instances of users authoring queries that take longer than this and being surprised by a 504 DEADLINE EXCEEDED. I propose:
improving the error we throw when the deadline is exceeded client side, not server side, to point the user to timeout and retry configuration;
considering an intermediate timeout setting for these methods. Today we have Default (60s) and long_running (1 hour). A setting somewhere in between, like 5 minutes, could keep a higher percentage of users from hitting this limit in the first place.
The OpenTelemetry assertions in the system tests fail if the transaction is aborted and retried. This is extremely brittle so the assertions need to be updated to account for retrying aborted transactions.
Example of test failure:
________ TestSessionAPI.test_transaction_read_and_insert_then_rollback _________
self = <tests.system.test_system.TestSessionAPI testMethod=test_transaction_read_and_insert_then_rollback>
@RetryErrors(exception=exceptions.ServerError)
@RetryErrors(exception=exceptions.Aborted)
def test_transaction_read_and_insert_then_rollback(self):
retry = RetryInstanceState(_has_all_ddl)
retry(self._db.reload)()
session = self._db.session()
session.create()
self.to_delete.append(session)
with self._db.batch() as batch:
batch.delete(self.TABLE, self.ALL)
transaction = session.transaction()
transaction.begin()
rows = list(transaction.read(self.TABLE, self.COLUMNS, self.ALL))
self.assertEqual(rows, [])
transaction.insert(self.TABLE, self.COLUMNS, self.ROW_DATA)
# Inserted rows can't be read until after commit.
rows = list(transaction.read(self.TABLE, self.COLUMNS, self.ALL))
self.assertEqual(rows, [])
transaction.rollback()
rows = list(session.read(self.TABLE, self.COLUMNS, self.ALL))
self.assertEqual(rows, [])
if HAS_OPENTELEMETRY_INSTALLED:
span_list = self.memory_exporter.get_finished_spans()
> self.assertEqual(len(span_list), 8)
E AssertionError: 14 != 8
tests/system/test_system.py:1026: AssertionError
----------------------------- Captured stdout call -----------------------------
409 Transaction was aborted., Trying again in 1 seconds...
------------------------------ Captured log call -------------------------------
WARNING opentelemetry.trace:__init__.py:468 Overriding current TracerProvider
WARNING opentelemetry.trace:__init__.py:468 Overriding current TracerProvider
Is your feature request related to a problem? Please describe.
I'd like to be able to run a query against a Spanner database and download (possibly large-ish -- MBs to GBs) results to a pandas DataFrame. Specifically, I'd like to eventually use this as a component in an ibis connector, but it'd also be useful for general data processing pipelines.
It's possible this is simpler than realized, so maybe it could just be a code sample.
If there were a SQLAlchemy connector (a much bigger project than read-only pandas dataframe), then pandas support is basically free via pandas.read_sql.
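Pending a real connector, a minimal bridge is just to pair the column names with the drained rows. `rows_to_records` is an illustrative helper, not a library API; in python-spanner the column names can be read from the result set's metadata once the first response has arrived, but here they are simply passed in:

```python
def rows_to_records(column_names, rows):
    """Convert result rows (sequences) plus column names into a list of
    dicts, ready to feed to pandas.DataFrame(...)."""
    return [dict(zip(column_names, row)) for row in rows]


# With pandas installed, the DataFrame is then just:
#   df = pandas.DataFrame(rows_to_records(names, list(results)))
```

Note that draining the StreamedResultSet into memory first is only reasonable for results that fit in RAM; the large (GBs) case would need chunked conversion.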
This release has caused the docs generation to fail due to issues in CHANGELOG.md. The root cause in the CHANGELOG should be found and fixed so the library can continue to rely on the most recent update.
If this proves difficult, the version can be temporarily pinned to 2.2.4 in the interim.
Previously, options was used as a keyword argument in the google.api_core.grpc_helpers.create_channel call, so I think it has just been omitted. This is the only reference to gRPC config I could find, so this probably means the gRPC channel is not being configured properly.
state = <grpc._channel._RPCState object at 0x7f7a01d97400>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f7a000de7c8>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "Cannot create database projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/databases/test-db-177a26949c from backup projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8 because the backup is still being created. Please retry the operation once the pending backup is complete."
E debug_error_string = "{"created":"@1603445340.328777425","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Cannot create database projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/databases/test-db-177a26949c from backup projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8 because the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"
E >
backup_sample.py:69: in restore_database
operation = new_database.restore(backup)
../../google/cloud/spanner_v1/database.py:543: in restore
self._instance.name, self.database_id, backup=source.name, metadata=metadata
../../google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py:675: in restore_database
request, retry=retry, timeout=timeout, metadata=metadata
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in call
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
on_error=on_error,
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:184: in retry_target
return target()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Cannot create dat...the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"
???
E google.api_core.exceptions.FailedPrecondition: 400 Cannot create database projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/databases/test-db-177a26949c from backup projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8 because the backup is still being created. Please retry the operation once the pending backup is complete.
If I have a table that already exists and I try to create the same table, this package raises an error, but its error code is None even though its message is Duplicate name in schema: .
Code: None gRPC_StatusCode: None Message: Duplicate name in schema: foo.
Traceback (most recent call last):
File "duplicate_table_v1.py", line 18, in<module>main()
File "duplicate_table_v1.py", line 12, in main
raise e
File "duplicate_table_v1.py", line 8, in main
result = lro.result()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/future/polling.py", line 127, in result
raise self._exception
google.api_core.exceptions.GoogleAPICallError: None Duplicate name in schema: foo.
This bug presents an inconsistency in the error handling because we get an error with a None status code and None gRPC status code, yet it has a message.
Comparison with Go
I can confirm that Cloud Spanner actually sends back the status code, because the Go client's result contains it, as shown by this reproduction and by investigating the responses sent by the Spanner server.
The default session pool BurstyPool does not create any sessions when bound and only creates them on demand. This means the first call made with this database will be slow. A session should be created when binding the pool to the database.
Is your feature request related to a problem? Please describe.
The current Google.cloud.spanner library has very limited functionalities compared to other db libraries (e.g., bigquery) and some of the key APIs are missing. For example,
get_table
table
list_tables
schema
query
execute
I would like to have the above functions added to the library.
Describe the solution you'd like
These functions could be implementable via the INFORMATION_SCHEMA query syntax.
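As a sketch of the requested `list_tables`, built on the INFORMATION_SCHEMA syntax mentioned above (the helper is hypothetical, not an existing API; `database` is assumed to be a spanner_v1 Database whose snapshot() context yields an object with execute_sql()):

```python
# SQL that lists the tables of the default schema via INFORMATION_SCHEMA.
LIST_TABLES_SQL = (
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_catalog = '' AND table_schema = ''"
)


def list_tables(database):
    """Return the table names of the database's default schema."""
    with database.snapshot() as snapshot:
        return [row[0] for row in snapshot.execute_sql(LIST_TABLES_SQL)]
```

The other requested functions (get_table, schema, etc.) could follow the same pattern with queries against information_schema.columns and friends.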
This bug is related to the Spanner client library.
For long lived transactions (>= 30 minutes), in the case of large PDML changes, it is possible that the gRPC connection is terminated with an error "Received unexpected EOS on DATA frame from server".
In this case, we need to retry the transaction either with the received resume token obtained on reading the stream or from scratch. This will ensure that the PDML transaction continues to execute until it is successful or a hard timeout is reached.
We have already implemented such change in the Java client library, for more information see this PR: googleapis/java-spanner#360.
In order to test the fix, we can use a large spanner database. Please speak to @thiagotnunes for more details.
A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.
If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.
For my purposes I need a transaction that will potentially be held by a Python DB-API v2 Cursor for an arbitrary period, so it definitely needs a refresh every 9 seconds sending SELECT 1=1. I have prototyped such a Transaction at https://gist.github.com/odeke-em/a17aa49854aeae1d83ffc14715f52d79
In the midst of concurrency and usage from other threads, this becomes painful to deal with, because at times both the refresher and the application might want to use any of the Transaction methods; hence the re-entrant locking around the "shared memory".
However, this is so much work to use this library, on top of other errors, that I feel the barrier to entry could be reduced by an option implemented in this library, so that I just have to do:
txn = sess.transaction(auto_refresh=True)
or, better yet, every Transaction should be able to auto-refresh itself.
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
self = <google.api_core.operation.Operation object at 0x7f7a01e52550>
retry = <google.api_core.retry.Retry object at 0x7f7a02170e48>
def _done_or_raise(self, retry=DEFAULT_RETRY):
"""Check if the future is done and raise if it's not."""
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
if not self.done(**kwargs):
raise _OperationNotComplete()
E google.api_core.future.polling._OperationNotComplete
The above exception was the direct cause of the following exception:
self = <google.api_core.operation.Operation object at 0x7f7a01e52550>
timeout = 1200, retry = <google.api_core.retry.Retry object at 0x7f7a02170e48>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7a01e52550>>)
predicate = <function if_exception_type..if_exception_type_predicate at 0x7f7a0216b8c8>
sleep_generator = <generator object exponential_sleep_generator at 0x7f7a01e4e150>
deadline = 1200, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
return target()
# pylint: disable=broad-except
# This function explicitly must deal with broad exceptions.
except Exception as exc:
if not predicate(exc):
raise
last_exc = exc
if on_error is not None:
on_error(exc)
now = datetime_helpers.utcnow()
if deadline_datetime is not None:
if deadline_datetime <= now:
six.raise_from(
exceptions.RetryError(
"Deadline of {:.1f}s exceeded while calling {}".format(
deadline, target
),
last_exc,
),
value = None, from_value = _OperationNotComplete()
???
E google.api_core.exceptions.RetryError: Deadline of 1200.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7a01e52550>>), last exception:
<string>:3: RetryError
During handling of the above exception, another exception occurred:
capsys = <_pytest.capture.CaptureFixture object at 0x7f7a01e3def0>
database = <google.cloud.spanner_v1.database.Database object at 0x7f7a01d2c400>
backup_sample.py:41: in create_backup
operation.result(1200)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:129: in result
self._blocking_poll(timeout=timeout, **kwargs)
self = <google.api_core.operation.Operation object at 0x7f7a01e52550>
timeout = 1200, retry = <google.api_core.retry.Retry object at 0x7f7a02170e48>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
retry_(self._done_or_raise)(**kwargs)
except exceptions.RetryError:
raise concurrent.futures.TimeoutError(
"Operation did not complete within the designated " "timeout."
)
E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout.
With spanner_v1 version 1.11, I am using this package without any concurrency, and yet I am getting back an obscure error that looks to me like an issue with the underlying gRPC library or with the coordination in this library:
google.api_core.exceptions.InvalidArgument: 400 Previously received a different request with this seqno. seqno=4
That finally results in #10, even after I've removed PingingPool in favor of the default pool.
Please find the full stack trace below in the details element.
======================================================================
ERROR: test_xview_class (admin_docs.test_middleware.XViewMiddlewareTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Previously received a different request with this seqno. seqno=4"
debug_error_string = "{"created":"@1580855981.982223904","description":"Error received from peer ipv4:108.177.13.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Previously received a different request with this seqno. seqno=4","grpc_status":3}">
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 89, in execute
self.__handle_update(self.__get_txn(), sql, args or None)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 101, in __handle_update
res = txn.execute_update(sql, params=params, param_types=get_param_types(params))
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/transaction.py", line 202, in execute_update
metadata=metadata,
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 810, in execute_sql
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 Previously received a different request with this seqno. seqno=4
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 93, in execute
raise ProgrammingError(e.details if hasattr(e, 'details') else e)
spanner.dbapi.exceptions.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/testcases.py", line 267, in __call__
self._pre_setup()
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/testcases.py", line 938, in _pre_setup
self._fixture_setup()
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/testcases.py", line 1165, in _fixture_setup
self.setUpTestData()
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/tests/admin_docs/tests.py", line 11, in setUpTestData
cls.superuser = User.objects.create_superuser(username='super', password='secret', email='[email protected]')
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/models.py", line 162, in create_superuser
return self._create_user(username, email, password, **extra_fields)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/models.py", line 145, in _create_user
user.save(using=self._db)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/base_user.py", line 66, in save
super().save(*args, **kwargs)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 741, in save
force_update=force_update, update_fields=update_fields)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 779, in save_base
force_update, using, update_fields,
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 851, in _save_table
forced_update)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 900, in _do_update
return filtered._update(values) > 0
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 760, in _update
return query.get_compiler(self.db).execute_sql(CURSOR)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/sql/compiler.py", line 1462, in execute_sql
cursor = super().execute_sql(result_type)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/sql/compiler.py", line 1133, in execute_sql
cursor.execute(sql, params)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 93, in execute
raise ProgrammingError(e.details if hasattr(e, 'details') else e)
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_xview_func (admin_docs.test_middleware.XViewMiddlewareTest)
----------------------------------------------------------------------
...<SAME CONTENT>
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_bookmarklets (admin_docs.test_views.AdminDocViewTests)
----------------------------------------------------------------------
...<SAME CONTENT>
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_index (admin_docs.test_views.AdminDocViewTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
...
raise ProgrammingError(e.details if hasattr(e, 'details') else e)
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_missing_docutils (admin_docs.test_views.AdminDocViewTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
...
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
...
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_model_with_many_to_one (admin_docs.test_views.TestModelDetailView)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
...
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_model_with_no_backward_relations_render_only_relevant_fields (admin_docs.test_views.TestModelDetailView)
----------------------------------------------------------------------
...<SAME CONTENT>
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
----------------------------------------------------------------------
Ran 13 tests in 0.927s
FAILED (errors=45)
Testing against Django installed in '/home/travis/build/orijtech/spanner-orm/django_tests/django/django' with up to 2 processes
Importing application admin_default_site
Importing application admin_docs
Skipping setup of unused database(s): other.
Operations to perform:
Synchronize unmigrated apps: admin_default_site, admin_docs, auth, contenttypes, messages, sessions, staticfiles
Apply all migrations: admin, sites
Synchronizing apps without migrations:
Creating tables...
Creating table django_content_type
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table django_session
Creating table admin_docs_company
Creating table admin_docs_group
Creating table admin_docs_family
Creating table admin_docs_person
Running deferred SQL...
Running migrations:
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying sites.0001_initial... OK
Applying sites.0002_alter_domain_unique... OK
System check identified no issues (0 silenced).
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.ABORTED
details = "Transaction not found"
debug_error_string = "{"created":"@1580855984.140864504","description":"Error received from peer ipv4:108.177.13.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Transaction not found","grpc_status":10}">
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "runtests.py", line 507, in <module>
options.exclude_tags,
File "runtests.py", line 294, in django_tests
extra_tests=extra_tests,
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 639, in run_tests
self.teardown_databases(old_config)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 583, in teardown_databases
keepdb=self.keepdb,
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/utils.py", line 299, in teardown_databases
connection.creation.destroy_test_db(old_name, verbosity, keepdb)
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/creation.py", line 241, in destroy_test_db
self.connection.close()
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/base.py", line 288, in close
self._close()
File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/base.py", line 250, in _close
returnself.connection.close()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/connection.py", line 30, in close
self.commit()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/connection.py", line 60, in commit
res = self.__txn.commit()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/transaction.py", line 127, in commit
metadata=metadata,
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 1556, in commit
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
returntarget()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.Aborted: 409 Transaction not found
The command "bash django_test_suite.sh" exited with 1.
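The log ends with `google.api_core.exceptions.Aborted` (gRPC status ABORTED), which Cloud Spanner uses for retryable transaction failures. `database.run_in_transaction` retries aborted transactions for you, but a bare `commit()` on a transaction object does not, as the failure during connection teardown above shows. A minimal sketch of the retry-with-backoff pattern involved (everything here is hypothetical: `Aborted` is a local stand-in for `google.api_core.exceptions.Aborted`, and `run_with_retries`/`flaky_commit` are illustrative names, not library APIs):

```python
import time


class Aborted(Exception):
    """Stand-in for google.api_core.exceptions.Aborted (HTTP 409 / gRPC ABORTED)."""


def run_with_retries(work, max_attempts=5, base_delay=0.01):
    """Call `work`, retrying with exponential backoff when it is aborted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return work()
        except Aborted:
            if attempt == max_attempts:
                raise  # out of attempts; surface the error to the caller
            # Back off before retrying: 0.01s, 0.02s, 0.04s, ...
            time.sleep(base_delay * (2 ** (attempt - 1)))


# Simulate a commit that is aborted twice before succeeding.
attempts = []


def flaky_commit():
    attempts.append(1)
    if len(attempts) < 3:
        raise Aborted("409 Transaction not found")
    return "committed"


result = run_with_retries(flaky_commit)
print(result, len(attempts))  # → committed 3
```

The real client implements this same idea in `google.api_core.retry`; the point is that the unit of work (the whole transaction function, not just the final commit) must be re-runnable, which is why the `run_in_transaction` callback style shown earlier is preferred.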