
python-spanner's Introduction

Python Client for Cloud Spanner


Cloud Spanner is the world's first fully managed relational database service to offer both strong consistency and horizontal scalability for mission-critical online transaction processing (OLTP) applications. With Cloud Spanner you enjoy all the traditional benefits of a relational database; but unlike any other relational database service, Cloud Spanner scales horizontally to hundreds or thousands of servers to handle the biggest transactional workloads.

Quick Start

In order to use this library, you first need to go through the following steps:

  1. Select or create a Cloud Platform project.
  2. Enable billing for your project.
  3. Enable the Google Cloud Spanner API.
  4. Set up authentication.
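
Once those steps are complete, the client library addresses Spanner resources by fully qualified names. The naming scheme can be sketched as follows; these helper functions are illustrative only, not part of google-cloud-spanner, which builds such paths internally:

```python
# Illustrative helpers (not part of the library) showing the
# resource-name format the Spanner client constructs internally.
def instance_path(project_id: str, instance_id: str) -> str:
    return f"projects/{project_id}/instances/{instance_id}"

def database_path(project_id: str, instance_id: str, database_id: str) -> str:
    return f"{instance_path(project_id, instance_id)}/databases/{database_id}"

print(database_path("my-project", "my-instance", "my-db"))
# → projects/my-project/instances/my-instance/databases/my-db
```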

Installation

Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.

With virtualenv, it's possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.

Supported Python Versions

Python >= 3.7

Deprecated Python Versions

Python == 2.7, Python == 3.5, Python == 3.6.

Mac/Linux

pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-cloud-spanner

Windows

pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-cloud-spanner

Example Usage

Executing Arbitrary SQL in a Transaction

Generally, to work with Cloud Spanner, you will want a transaction. The preferred mechanism for this is to create a single function, which executes as a callback to database.run_in_transaction:

from google.cloud import spanner

# First, define the function that represents a single "unit of work"
# that should be run within the transaction.
def update_anniversary(transaction, person_id, unix_timestamp):
    # The query itself is just a string.
    #
    # The use of @parameters is recommended rather than doing your
    # own string interpolation; this provides protection against
    # SQL injection attacks.
    query = """SELECT anniversary FROM people
        WHERE id = @person_id"""

    # When executing the SQL statement, the query and parameters are sent
    # as separate arguments. When using parameters, you must specify
    # both the parameters themselves and their types.
    row = transaction.execute_sql(
        query,
        params={'person_id': person_id},
        param_types={
            'person_id': spanner.param_types.INT64,
        },
    ).one()

    # Now perform an update on the data.
    old_anniversary = row[0]
    new_anniversary = _compute_anniversary(old_anniversary, unix_timestamp)
    transaction.update(
        'people',
        ['id', 'anniversary'],
        [[person_id, new_anniversary]],
    )

# Actually run the `update_anniversary` function in a transaction.
database.run_in_transaction(update_anniversary,
    person_id=42,
    unix_timestamp=1335020400,
)

Select records using a Transaction

Once you have a transaction object (such as the first argument sent to run_in_transaction), reading data is easy:

# Define a SELECT query.
query = """SELECT e.first_name, e.last_name, p.telephone
    FROM employees as e, phones as p
    WHERE p.employee_id == e.employee_id"""

# Execute the query and return results.
result = transaction.execute_sql(query)
for row in result.rows:
    print(row)

Insert records using Data Manipulation Language (DML) with a Transaction

Use the execute_update() method to execute a DML statement:

spanner_client = spanner.Client()
instance = spanner_client.instance(instance_id)
database = instance.database(database_id)

def insert_singers(transaction):
    row_ct = transaction.execute_update(
        "INSERT Singers (SingerId, FirstName, LastName) "
        " VALUES (10, 'Virginia', 'Watson')"
    )

    print("{} record(s) inserted.".format(row_ct))

database.run_in_transaction(insert_singers)

Insert records using Mutations with a Transaction

To add one or more records to a table, use insert:

transaction.insert(
    'citizens',
    columns=['email', 'first_name', 'last_name', 'age'],
    values=[
        ['[email protected]', 'Phred', 'Phlyntstone', 32],
        ['[email protected]', 'Bharney', 'Rhubble', 31],
    ],
)

Update records using Data Manipulation Language (DML) with a Transaction

spanner_client = spanner.Client()
instance = spanner_client.instance(instance_id)
database = instance.database(database_id)

def update_albums(transaction):
    row_ct = transaction.execute_update(
        "UPDATE Albums "
        "SET MarketingBudget = MarketingBudget * 2 "
        "WHERE SingerId = 1 and AlbumId = 1"
    )

    print("{} record(s) updated.".format(row_ct))

database.run_in_transaction(update_albums)

Update records using Mutations with a Transaction

Transaction.update updates one or more existing records in a table. It fails if any of the records does not already exist.

transaction.update(
    'citizens',
    columns=['email', 'age'],
    values=[
        ['[email protected]', 33],
        ['[email protected]', 32],
    ],
)

Connection API

The Connection API is a wrapper around the Python Spanner API, implemented in accordance with PEP 249 (the Python DB-API 2.0), and provides a simple way to communicate with a Spanner database through connection objects:

from google.cloud.spanner_dbapi.connection import connect

connection = connect("instance-id", "database-id")
connection.autocommit = True

cursor = connection.cursor()
cursor.execute("SELECT * FROM table_name")

result = cursor.fetchall()

Aborted Transactions Retry Mechanism

In non-autocommit mode, transactions can be aborted due to transient errors. In most cases, retrying the aborted transaction solves the problem. To simplify this, the connection tracks the SQL statements executed in the current transaction. If the transaction is aborted, the connection starts a new one and re-executes all of the statements. While doing so, the connection checks that the retried statements return the same results as the originals; if the results differ, the transaction is dropped, because the underlying data has changed and an automatic retry is impossible.

Auto-retry of aborted transactions is enabled only in non-autocommit mode, since in autocommit mode transactions are never aborted.
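
The replay-and-compare mechanism described above can be modeled in plain Python. This is a simplified illustration of the idea, not the library's actual implementation:

```python
import hashlib

def checksum(rows):
    # Stable digest of a result set, used to compare the original run
    # against the retried run.
    return hashlib.sha256(repr(sorted(rows)).encode()).hexdigest()

def replay(tracked, execute):
    """Re-execute (sql, original_checksum) pairs after an abort.

    Returns True when every retried statement matches its original
    result; False means the underlying data changed and the retry
    must be abandoned.
    """
    return all(checksum(execute(sql)) == original for sql, original in tracked)

# Toy backend standing in for Spanner.
data = {"SELECT balance FROM accounts": [(100,)]}
tracked = [("SELECT balance FROM accounts",
            checksum(data["SELECT balance FROM accounts"]))]

assert replay(tracked, lambda sql: data[sql])       # data unchanged: retry OK
data["SELECT balance FROM accounts"] = [(50,)]
assert not replay(tracked, lambda sql: data[sql])   # data changed: abandon retry
```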


python-spanner's People

Contributors

ankiaga, ansh0l, asthamohta, busunkim96, c24t, c2nes, chemelnucfin, crwilcox, dandhlee, daspecster, dhermes, gcf-owl-bot[bot], haihuang-google, harshachinta, hemangchothani, ilyafaer, larkee, lukesneeringer, nginsberg-google, parthea, rahul2393, rajatbhatta, release-please[bot], renovate-bot, skuruppu, surbhigarg92, tseaver, vi3k6i5, yoshi-automation, zoercai


python-spanner's Issues

samples.samples.backup_sample_test: test_list_backups failed

This test failed!

To configure my behavior, see the Build Cop Bot documentation.

If I'm commenting on this issue too often, add the buildcop: quiet label and
I will stop commenting.


commit: 39ea948
buildURL: Build Status, Sponge
status: failed

Test output
capsys = <_pytest.capture.CaptureFixture object at 0x7f7a01e597f0>
spanner_instance = 
def test_list_backups(capsys, spanner_instance):
    backup_sample.list_backups(INSTANCE_ID, DATABASE_ID, BACKUP_ID)
    out, _ = capsys.readouterr()
    id_count = out.count(BACKUP_ID)
  assert id_count == 7

E assert 6 == 7

backup_sample_test.py:99: AssertionError

Backup tests are flaky

The Kokoro tests are often failing on backup tests for unrelated changes. There are two problems:

  • UpdateBackup sometimes times out with DEADLINE_EXCEEDED as seen in GoogleCloudPlatform/python-docs-samples#3241
  • test_list_backups is failing due to a backup from a different test being included in the returned list for size_bytes

The first problem can be resolved by increasing the timeout for UpdateBackup.

The second problem is difficult to replicate and the exact cause is unclear given that the tests are not run in parallel and the backups are being deleted at the end of each test. The simplest solution will be to modify the test to ensure that no backups from previous tests meet the condition.
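
One way to implement that isolation is to give every test run a unique backup ID, so that leftovers from earlier or concurrent runs can never match. A sketch (the helper name is ours, not part of the test suite):

```python
import uuid

def unique_backup_id(prefix: str = "test-backup") -> str:
    # A random per-run suffix guarantees that backups left behind by
    # earlier or concurrent test runs never match this run's ID.
    return f"{prefix}-{uuid.uuid4().hex[:10]}"

backup_id = unique_backup_id()
```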

samples.samples.backup_sample_test: test_list_backup_operations failed

This test failed!



commit: 7549383
buildURL: Build Status, Sponge
status: failed

Test output
args = (parent: "projects/python-docs-samples-tests/instances/test-instance-e13b1361c5"
filter: "(metadata.database:test-db-6168dc91d9) AND (metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata)"
,)
kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/python-docs-samples-tests/instances/test-instance-e13b1361c5'...361c5'), ('x-goog-api-client', 'gl-python/3.6.10 grpc/1.32.0 gax/1.22.4 gapic/1.19.0 gccl/1.19.0')], 'timeout': 3599.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
    try:
      return callable_(*args, **kwargs)

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:57:


self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fa829802cc0>
request = parent: "projects/python-docs-samples-tests/instances/test-instance-e13b1361c5"
filter: "(metadata.database:test-db-6168dc91d9) AND (metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata)"

timeout = 3599.0
metadata = [('google-cloud-resource-prefix', 'projects/python-docs-samples-tests/instances/test-instance-e13b1361c5'), ('x-goog-r.../test-instance-e13b1361c5'), ('x-goog-api-client', 'gl-python/3.6.10 grpc/1.32.0 gax/1.22.4 gapic/1.19.0 gccl/1.19.0')]
credentials = None, wait_for_ready = None, compression = None

def __call__(self,
             request,
             timeout=None,
             metadata=None,
             credentials=None,
             wait_for_ready=None,
             compression=None):
    state, call, = self._blocking(request, timeout, metadata, credentials,
                                  wait_for_ready, compression)
  return _end_unary_response_blocking(state, call, False, None)

.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:826:


state = <grpc._channel._RPCState object at 0x7fa82980acf8>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fa829a5a508>
with_call = False, deadline = None

def _end_unary_response_blocking(state, call, with_call, deadline):
    if state.code is grpc.StatusCode.OK:
        if with_call:
            rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
            return state.response, rendezvous
        else:
            return state.response
    else:
      raise _InactiveRpcError(state)

E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.INVALID_ARGUMENT
E details = "Invalid ListBackupOperations request."
E debug_error_string = "{"created":"@1602062355.673804195","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Invalid ListBackupOperations request.","grpc_status":3}"
E >

.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:729: _InactiveRpcError

The above exception was the direct cause of the following exception:

capsys = <_pytest.capture.CaptureFixture object at 0x7fa829792940>
spanner_instance = <google.cloud.spanner_v1.instance.Instance object at 0x7fa82b3ba710>

def test_list_backup_operations(capsys, spanner_instance):
  backup_sample.list_backup_operations(INSTANCE_ID, DATABASE_ID)

backup_sample_test.py:80:


backup_sample.py:133: in list_backup_operations
for op in operations:
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/page_iterator.py:212: in _items_iter
for page in self._page_iter(increment=False):
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/page_iterator.py:243: in _page_iter
page = self._next_page()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/page_iterator.py:534: in _next_page
response = self._method(self._request)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in call
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
on_error=on_error,
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:184: in retry_target
return target()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)


value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid ListBackupOp...c/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Invalid ListBackupOperations request.","grpc_status":3}"

???
E google.api_core.exceptions.InvalidArgument: 400 Invalid ListBackupOperations request.

:3: InvalidArgument

Incorrect issue links in CHANGELOG.md in last release notes

I happened to notice broken issue URLs in CHANGELOG.md (only in the 1.14.0 release notes). For example, one link leads to python-spanner/issues/10183, but the original issue is actually google-cloud-python/pull/10183.

@larkee, PTAL. Fixing a couple of links is not a problem, but I assume they were generated by a release tool or some kind of script, so this could happen again in future releases.

spanner: unconsumed/uniterated StreamedResultSet doesn't send request to Cloud Spanner server

This is an issue that has plagued me for a while but I just got the time to make a repro.

Basically, if I try for example to invoke Transaction.execute_sql and do NOT consume the result e.g.

txn.execute_sql('DELETE from T1 WHERE 1=1')

instead of

res = txn.execute_sql('DELETE from T1 WHERE 1=1')
_ = list(res)

then the table will NOT be purged.

Seems like a bug to me with the underlying gRPC library, but it would be useful to explicitly document/call-out this bug if we don't have the bandwidth to fix it, to avoid unexpected problems for customers. It definitely sunk some hours for me in the past and also just right now.
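
The behavior matches Python generator semantics: execute_sql returns a lazy StreamedResultSet, and the work only happens when the result is iterated. A plain-Python analogy of that laziness (not Spanner code):

```python
def make_stream(log):
    # A generator body runs only when the generator is consumed,
    # just as the streamed result set only does its work on iteration.
    def stream():
        log.append("request sent")
        yield "row"
    return stream()

log = []
result = make_stream(log)  # nothing has happened yet
assert log == []
rows = list(result)        # consuming the stream triggers the work
assert log == ["request sent"]
assert rows == ["row"]
```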

spanner: consider exporting Transaction._rolled_back as Transaction.rolled_back

I am currently dealing with a situation where a Transaction might have been rolled back but the exception wasn't directly passed back to me as per

======================================================================
ERROR: test_concurrent_delete_with_save (basic.tests.ConcurrentSaveTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_channel.py", line 565, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.FAILED_PRECONDITION
	details = "Cannot start a read or query within a transaction after Commit() or Rollback() has been called."
	debug_error_string = "{"created":"@1580864794.999511000","description":"Error received from peer ipv6:[2607:f8b0:4007:803::200a]:443","file":"src/core/lib/surface/call.cc","file_line":1046,"grpc_message":"Cannot start a read or query within a transaction after Commit() or Rollback() has been called.","grpc_status":9}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/spanner/dbapi/cursor.py", line 89, in execute
    self.__handle_insert(self.__get_txn(), sql, args or None)
  File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/spanner/dbapi/cursor.py", line 139, in __handle_insert
    param_types=param_types,
  File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/spanner/dbapi/cursor.py", line 356, in handle_txn_exec_with_retry
    return txn_method(*args, **kwargs)
  File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/google/cloud/spanner_v1/transaction.py", line 202, in execute_update
    metadata=metadata,
  File "/Users/emmanuelodeke/Library/Python/3.7/lib/python/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 810, in execute_sql
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/retry.py", line 277, in retry_wrapped_func
    on_error=on_error,
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/retry.py", line 182, in retry_target
    return target()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.FailedPrecondition: 400 Cannot start a read or query within a transaction after Commit() or Rollback() has been called.

and I see that we exported the attribute committed, but unfortunately not _rolled_back, which I need in order to ensure that I can correctly run commit or rollback on my transactions without a failed-precondition error.
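
As a stopgap until a public attribute exists, the private flag can be read defensively with getattr. The sketch below uses a dummy object in place of the real Transaction:

```python
class FakeTransaction:
    # Stand-in for google.cloud.spanner_v1.transaction.Transaction,
    # mimicking its `committed` timestamp and private `_rolled_back` flag.
    def __init__(self):
        self.committed = None
        self._rolled_back = False

def is_finished(txn) -> bool:
    # Reading the private attribute via getattr avoids breaking if a
    # future release renames or removes it.
    return txn.committed is not None or getattr(txn, "_rolled_back", False)

txn = FakeTransaction()
assert not is_finished(txn)
txn._rolled_back = True
assert is_finished(txn)
```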

Cloud Spanner: creation of new database spuriously returns ALREADY_EXISTS

OS type and version: standard CircleCI docker image circleci/python:3.6.1, running on Linux 558edcd72a3d 4.15.0-1052-aws #54-Ubuntu SMP Tue Oct 1 15:43:26 UTC 2019 x86_64 Linux.

Python version: 3.6.1

Using google-cloud-spanner library 1.13.0.

This sampledb integration test creates a new database, with a name including the current time down to second resolution.

The test is not invoked in parallel, so this database creation should never fail due to an already existing database of the same name. However, this error did occur, as the log below shows -- maybe that's a bug in the retry implementation in the library?

#!/bin/bash -eo pipefail
. venv/bin/activate
pytest
============================= test session starts ==============================
platform linux -- Python 3.6.1, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
rootdir: /home/circleci/repo
collected 1 item                                                               

batch_import_test.py F                                                   [100%]

=================================== FAILURES ===================================
______________________________ test_batch_import _______________________________

args = (parent: "projects/cloudspannerecosystem/instances/***************************"
create_statement: "CREATE DATABASE `sa...ore, url)"
extra_statements: "\n\nCREATE INDEX StoriesByTitleTimeScore ON stories(title) STORING (time_ts, score)\n"
,)
kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/cloudspannerecosystem/instances/***************************/d...ion-test'), ('x-goog-api-client', 'gl-python/3.6.1 grpc/1.26.0 gax/1.15.0 gapic/1.13.0 gccl/1.13.0')], 'timeout': 60.0}

    @six.wraps(callable_)
    def error_remapped_callable(*args, **kwargs):
        try:
>           return callable_(*args, **kwargs)

venv/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:57: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fbb3527c4a8>
request = parent: "projects/cloudspannerecosystem/instances/***************************"
create_statement: "CREATE DATABASE `sam...score, url)"
extra_statements: "\n\nCREATE INDEX StoriesByTitleTimeScore ON stories(title) STORING (time_ts, score)\n"

timeout = 60.0
metadata = [('google-cloud-resource-prefix', 'projects/cloudspannerecosystem/instances/***************************/databases/samp...ontinuous-integration-test'), ('x-goog-api-client', 'gl-python/3.6.1 grpc/1.26.0 gax/1.15.0 gapic/1.13.0 gccl/1.13.0')]
credentials = None, wait_for_ready = None, compression = None

    def __call__(self,
                 request,
                 timeout=None,
                 metadata=None,
                 credentials=None,
                 wait_for_ready=None,
                 compression=None):
        state, call, = self._blocking(request, timeout, metadata, credentials,
                                      wait_for_ready, compression)
>       return _end_unary_response_blocking(state, call, False, None)

venv/lib/python3.6/site-packages/grpc/_channel.py:824: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

state = <grpc._channel._RPCState object at 0x7fbb352144a8>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fbb35210088>
with_call = False, deadline = None

    def _end_unary_response_blocking(state, call, with_call, deadline):
        if state.code is grpc.StatusCode.OK:
            if with_call:
                rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
                return state.response, rendezvous
            else:
                return state.response
        else:
>           raise _InactiveRpcError(state)
E           grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E           	status = StatusCode.ALREADY_EXISTS
E           	details = "Database already exists: projects/cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24"
E           	debug_error_string = "{"created":"@1579392565.335114093","description":"Error received from peer ipv4:172.217.13.74:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Database already exists: projects/cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24","grpc_status":6}"
E           >

venv/lib/python3.6/site-packages/grpc/_channel.py:726: _InactiveRpcError

The above exception was the direct cause of the following exception:

    def test_batch_import():
      instance_id = os.environ['SPANNER_INSTANCE']
    
      # Append the current timestamp to the database name.
      now_str = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
      database_id = 'sampledb_%s' % now_str
>     batch_import.main(instance_id, database_id)

batch_import_test.py:29: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
batch_import.py:74: in main
    database.create()
venv/lib/python3.6/site-packages/google/cloud/spanner_v1/database.py:221: in create
    metadata=metadata,
venv/lib/python3.6/site-packages/google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py:424: in create_database
    request, retry=retry, timeout=timeout, metadata=metadata
venv/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:143: in __call__
    return wrapped_func(*args, **kwargs)
venv/lib/python3.6/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
    on_error=on_error,
venv/lib/python3.6/site-packages/google/api_core/retry.py:184: in retry_target
    return target()
venv/lib/python3.6/site-packages/google/api_core/timeout.py:214: in func_with_timeout
    return func(*args, **kwargs)
venv/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

value = None
from_value = <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.ALREADY_EXISTS
	details = "Database already exist...cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24","grpc_status":6}"
>

>   ???
E   google.api_core.exceptions.AlreadyExists: 409 Database already exists: projects/cloudspannerecosystem/instances/***************************/databases/sampledb_2020-01-19_00-09-24

<string>:3: AlreadyExists
============================== 1 failed in 1.17s ===============================

Exited with code exit status 1

It might be relevant that the implementation currently doesn't wait for the future returned by the database creation, which this PR will fix. So potentially that could lead to another operation retrying the creation?

In the logs for the Cloud Spanner instance there is only a single error listed, for google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase.

samples.samples.backup_sample_test: test_list_backups failed

Note: #159 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.


commit: 6053f4a
buildURL: Build Status, Sponge
status: failed

Test output
capsys = <_pytest.capture.CaptureFixture object at 0x7fa87753eeb8>
spanner_instance = 
def test_list_backups(capsys, spanner_instance):
    backup_sample.list_backups(INSTANCE_ID, DATABASE_ID, BACKUP_ID)
    out, _ = capsys.readouterr()
    id_count = out.count(BACKUP_ID)
  assert id_count == 7

E assert 6 == 7

backup_sample_test.py:99: AssertionError

samples.samples.backup_sample_test: test_restore_database failed

Note: #158 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.


commit: 6053f4a
buildURL: Build Status, Sponge
status: failed

Test output
args = (parent: "projects/python-docs-samples-tests/instances/test-instance-ea2482b380"
database_id: "test-db-652111f0ff"
backup: "projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e"
,)
kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/python-docs-samples-tests/instances/test-instance-ea2482b380/...sts/instances/test-instance-ea2482b380'), ('x-goog-api-client', 'gl-python/3.6.10 grpc/1.33.2 gax/1.23.0 gccl/2.0.0')]}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
    try:
      return callable_(*args, **kwargs)

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:57:


self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fa8776fdc18>
request = parent: "projects/python-docs-samples-tests/instances/test-instance-ea2482b380"
database_id: "test-db-652111f0ff"
backup: "projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e"

timeout = None
metadata = [('google-cloud-resource-prefix', 'projects/python-docs-samples-tests/instances/test-instance-ea2482b380/databases/tes...ests/instances/test-instance-ea2482b380'), ('x-goog-api-client', 'gl-python/3.6.10 grpc/1.33.2 gax/1.23.0 gccl/2.0.0')]
credentials = None, wait_for_ready = None, compression = None

def __call__(self,
             request,
             timeout=None,
             metadata=None,
             credentials=None,
             wait_for_ready=None,
             compression=None):
    state, call, = self._blocking(request, timeout, metadata, credentials,
                                  wait_for_ready, compression)
  return _end_unary_response_blocking(state, call, False, None)

.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:923:


state = <grpc._channel._RPCState object at 0x7fa8776fdd30>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fa877630788>
with_call = False, deadline = None

def _end_unary_response_blocking(state, call, with_call, deadline):
    if state.code is grpc.StatusCode.OK:
        if with_call:
            rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
            return state.response, rendezvous
        else:
            return state.response
    else:
      raise _InactiveRpcError(state)

E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "Cannot create database projects/python-docs-samples-tests/instances/test-instance-ea2482b380/databases/test-db-652111f0ff from backup projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e because the backup is still being created. Please retry the operation once the pending backup is complete."
E debug_error_string = "{"created":"@1606386594.850508711","description":"Error received from peer ipv4:74.125.197.95:443","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Cannot create database projects/python-docs-samples-tests/instances/test-instance-ea2482b380/databases/test-db-652111f0ff from backup projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e because the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"
E >

.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:826: _InactiveRpcError

The above exception was the direct cause of the following exception:

capsys = <_pytest.capture.CaptureFixture object at 0x7fa8776180f0>

@RetryErrors(exception=DeadlineExceeded, max_tries=2)
def test_restore_database(capsys):
  backup_sample.restore_database(INSTANCE_ID, RESTORE_DB_ID, BACKUP_ID)

backup_sample_test.py:75:


backup_sample.py:69: in restore_database
operation = new_database.restore(backup)
../../google/cloud/spanner_v1/database.py:551: in restore
metadata=metadata,
../../google/cloud/spanner_admin_database_v1/services/database_admin/client.py:1835: in restore_database
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in call
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)


value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Cannot create dat...the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"

???
E google.api_core.exceptions.FailedPrecondition: 400 Cannot create database projects/python-docs-samples-tests/instances/test-instance-ea2482b380/databases/test-db-652111f0ff from backup projects/python-docs-samples-tests/instances/test-instance-ea2482b380/backups/test-backup-c150364a3e because the backup is still being created. Please retry the operation once the pending backup is complete.

:3: FailedPrecondition
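The failure above comes from calling restore_database while the backup-create long-running operation is still pending. A minimal, generic sketch of the guard the test needs — the is_ready predicate and the timeout values here are hypothetical, not part of the actual sample:

```python
import time


def wait_until_ready(is_ready, timeout=1200, poll_interval=5):
    """Poll a readiness predicate until it returns True or the deadline passes.

    In the failing test, is_ready would wrap a check that the backup-create
    operation has completed before restore_database is called.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(poll_interval)
    raise TimeoutError("backup was not ready within %s seconds" % timeout)
```

In the real sample the predicate would check the done state of the operation returned by backup creation before attempting the restore.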

spanner.Client().list_instances method filter_ parameter not working

Thanks for stopping by to let us know something could be better!

PLEASE READ: If you have a support contract with Google, please create an issue in the support console instead of filing on GitHub. This will ensure a timely response.

Please run down the following list and make sure you've tried the usual "quick fixes":

If you are still having issues, please be sure to include as much information as possible:

Environment details

  • OS type and version: Alpine, MacOS,...
  • Python version: 3.8.2
  • pip version: 20.1.1
  • google-cloud-spanner version: 1.17.1

Steps to reproduce

  1. Initialize a client
  2. Run client.list_instances(filter_="name:something")

Code example

# example

from google.cloud import spanner

c = spanner.Client(project="my-project")
print([ i.instance_id for i in c.list_instances(filter_="name:something") ])
print([ i.instance_id for i in c.list_instances(filter_="labels.env:temp") ])

Stack trace

# example

The code doesn't throw any exceptions or errors, but it returns the list of all instances, ignoring the filter_ parameter.

Making sure to follow these steps will guarantee the quickest resolution possible.

Thanks!

spanner: occasional "google.api_core.exceptions.Aborted: 409 Transaction not found" error with PingingPool

Given spanner_v1 version 1.11.0, I am obtaining a transaction from a PingingPool as per

    # Create a session pool that'll periodically refresh every 3 minutes (an arbitrary choice).
    pool = spanner.PingingPool(size=10, default_timeout=5, ping_interval=180)
    background_thread = threading.Thread(target=pool.ping, name='ping-pool')
    background_thread.daemon = True
    background_thread.start()

    db = client_instance.database(database, pool=pool)
    if not db.exists():
        raise ProgrammingError("database '%s' does not exist." % database)

    sess = db.session()
    ...
    # Then later obtaining a transaction and holding it for a long-ish time
    txn = sess.transaction()
    txn.begin()
    # Do a bunch of operations within the transaction
    ...
    txn.commit()

and I can confirm that pool isn't being used concurrently, but I've seen a test failure with

  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.ABORTED
	details = "Transaction not found"
	debug_error_string = "{"created":"@1580854844.873538358","description":"Error received from peer ipv4:172.217.204.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Transaction not found","grpc_status":10}"

and in full detail

Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.ABORTED
	details = "Transaction not found"
	debug_error_string = "{"created":"@1580854844.873538358","description":"Error received from peer ipv4:172.217.204.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Transaction not found","grpc_status":10}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "runtests.py", line 507, in <module>
    options.exclude_tags,
  File "runtests.py", line 294, in django_tests
    extra_tests=extra_tests,
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 629, in run_tests
    old_config = self.setup_databases(aliases=databases)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 554, in setup_databases
    self.parallel, **kwargs
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/utils.py", line 174, in setup_databases
    serialize=connection.settings_dict.get('TEST', {}).get('SERIALIZE', True),
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/django/creation.py", line 33, in create_test_db
    super().create_test_db(*args, **kwargs)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/creation.py", line 72, in create_test_db
    run_syncdb=True,
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/__init__.py", line 148, in call_command
    return command.execute(*args, **defaults)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/base.py", line 83, in wrapped
    res = handle_func(*args, **kwargs)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/commands/migrate.py", line 257, in handle
    self.verbosity, self.interactive, connection.alias, apps=post_migrate_apps, plan=plan,
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/core/management/sql.py", line 51, in emit_post_migrate_signal
    **kwargs
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/dispatch/dispatcher.py", line 175, in send
    for receiver in self._live_receivers(sender)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/dispatch/dispatcher.py", line 175, in <listcomp>
    for receiver in self._live_receivers(sender)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/management/__init__.py", line 83, in create_permissions
    Permission.objects.using(using).bulk_create(perms)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 468, in bulk_create
    self._batched_insert(objs_with_pk, fields, batch_size, ignore_conflicts=ignore_conflicts)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 1211, in _batched_insert
    self._insert(item, fields=fields, using=self.db, ignore_conflicts=ignore_conflicts)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 1186, in _insert
    return query.get_compiler(using=using).execute_sql(return_id)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/sql/compiler.py", line 1368, in execute_sql
    cursor.execute(sql, params)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 67, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 76, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 87, in execute
    self.__handle_insert(self.__get_txn(), sql, args or None)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 128, in __handle_insert
    res = txn.execute_update(sql, params=params, param_types=param_types)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/transaction.py", line 202, in execute_update
    metadata=metadata,
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 810, in execute_sql
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.Aborted: 409 Transaction not found

[Spanner] authentication issues should result in a user error

  1. Copy-paste the code from https://cloud.google.com/spanner/docs/getting-started/python/
  2. Run

... it hangs forever

  1. Spend an hour struggling to figure out what is happening.

  2. Figure out how to turn on http logging with,

    import logging
    
    logging.basicConfig(level=logging.DEBUG)
    
  3. See:

    DEBUG:google.auth.transport.requests:Making request: POST https://oauth2.googleapis.com/token
    DEBUG:urllib3.connectionpool:https://oauth2.googleapis.com:443 "POST /token HTTP/1.1" 400 None
    ERROR:grpc._plugin_wrapping:AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x10d79d4a8>" raised exception!
    Traceback (most recent call last):
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/grpc/_plugin_wrapping.py", line 79, in __call__
        callback_state, callback))
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 77, in __call__
        callback(self._get_authorization_headers(context), None)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 65, in _get_authorization_headers
        headers)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/credentials.py", line 122, in before_request
        self.refresh(request)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/service_account.py", line 322, in refresh
        request, self._token_uri, assertion)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/_client.py", line 145, in jwt_grant
        response_data = _token_endpoint_request(request, token_uri, body)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/_client.py", line 111, in _token_endpoint_request
        _handle_error_response(response_body)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/_client.py", line 61, in _handle_error_response
        error_details, response_body)
    google.auth.exceptions.RefreshError: ('invalid_grant: Not a valid email or user ID.', '{\n  "error": "invalid_grant",\n  "error_description": "Not a valid email or user ID."\n}')
    DEBUG:google.api_core.retry:Retrying due to 503 Getting metadata from plugin failed with error: ('invalid_grant: Not a valid email or user ID.', '{\n  "error": "invalid_grant",\n  "error_description": "Not a valid email or user ID."\n}'), sleeping 1.3s ...
    

This is an error that will never resolve. We should surface it to the user immediately.

Also: I have no idea why this is a 503 UNAVAILABLE. Why would it not be a 400 BAD REQUEST or 401 UNAUTHORIZED??
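A sketch of the fail-fast behaviour this issue asks for: stop treating credential refresh failures as transient. The RefreshError and Unavailable classes and the retry helper below are simplified stand-ins for the google.auth / google.api_core machinery, not the actual implementation:

```python
class RefreshError(Exception):
    """Stand-in for google.auth.exceptions.RefreshError (e.g. invalid_grant)."""


class Unavailable(Exception):
    """Stand-in for a transient 503 from the service."""


def call_with_retry(func, max_attempts=5):
    """Retry transient errors, but surface credential errors immediately."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except RefreshError:
            # invalid_grant will never succeed on retry: raise it right away
            # so the user sees the real problem instead of an endless hang.
            raise
        except Unavailable:
            if attempt == max_attempts:
                raise
            # real code would sleep with exponential backoff here
```

With this shape, the invalid_grant case above would raise on the first attempt instead of being retried forever as a 503.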

Transaction: on retry, replays should compare checksums of prior numbered statements that succeeded

Coming here from a project that plans on adding Cloud Spanner as a backend for Django.

In AUTOCOMMIT=off mode, we need to hold a Transaction for perhaps an indefinitely long time.
Cloud Spanner will abort:
a) Transactions that go unused for 10 seconds or more -- we can periodically send a SELECT 1=1 to keep them active
b) Transactions, even when refreshed, can and will abort, because Cloud Spanner has a high abort rate

Thus we need to retry Transactions!

Current retry

The current code for retrying in this repository just re-invokes the function that was passed into *.run_in_transaction afresh with a new Transaction:

while True:
    if self._transaction is None:
        txn = self.transaction()
    else:
        txn = self._transaction
    if txn._transaction_id is None:
        txn.begin()
    try:
        attempts += 1
        return_value = func(txn, *args, **kw)
    except Aborted as exc:
        del self._transaction
        _delay_until_retry(exc, deadline, attempts)
        continue
    except GoogleAPICallError:
        del self._transaction
        raise
    except Exception:
        txn.rollback()
        raise
    try:
        txn.commit()
    except Aborted as exc:
        del self._transaction
        _delay_until_retry(exc, deadline, attempts)
    except GoogleAPICallError:
        del self._transaction
        raise
    else:
        return return_value

Recommended retry

However, the correct way to retry Transactions as @bvandiver explained to me

You are getting quite close to the implementation in the open source JDBC driver. Rather than re-inventing things, I would suggest following their implementation. Of note, your current replay mechanism can lead to wrong answers. Imagine the canonical "transfer balance" transaction which decrements the balance in acct A, then increases the balance in acct B. However, between abort and retry someone deletes acct A - resulting in money magically appearing in acct B and no error (the update silently fails to update any rows). The long and the short of it is that you need to hash the results of all queries + DML and confirm on your retry that they give the same answers. You need query too (think a query to check if there was sufficient balance in acct A).

a) For every result returned by an operation on a Transaction, compute the checksum and add it to a FIFO stack
b) At the point that a prior Transaction fails, that's the bottom of our stack
c) When retrying the Transaction from the first statement, compare its checksum with the same ordinal number/index on the FIFO stack -- if any of them don't match, abort the Transaction as not retryable

This is what the Java spanner-jdbc implementation does
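The a)/b)/c) scheme above can be sketched as a small stand-alone helper. This is illustrative only, not the client's actual API; the hashing and serialization choices are assumptions:

```python
import hashlib
import pickle


class ResultsChecksum:
    """Running per-statement checksums for a transaction's results.

    One digest is recorded per statement, in execution order, so a retry
    can be compared against the original attempt up to the abort point.
    """

    def __init__(self):
        self._digests = []

    def consume(self, result):
        # pickle is an assumption; any stable serialization of the rows works
        self._digests.append(hashlib.sha256(pickle.dumps(result)).hexdigest())

    def matches_prefix_of(self, original):
        """True if this replay has reproduced the original results so far."""
        return self._digests == original._digests[: len(self._digests)]
```

On retry, each replayed statement's result is consumed into a fresh checksum and compared against the original attempt's digest at the same ordinal; any mismatch means the transaction must be aborted as non-retryable.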

Suggestion

The implementation of this feature, when attempted outside of this package, involves a whole lot of hacking, since we need to consume the raw data sent to StreamedResultSets, which then requires proto marshalling and wrapping StreamedResult -- quite non-ideal, and it will actually require patches to python-spanner.

@bvandiver and I chatted again about this today and I also briefly raised this issue to @skuruppu this afternoon too.

Synthesis failed for python-spanner

Hello! Autosynth couldn't regenerate python-spanner. 💔

Here's the output from running synth.py:

 "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
        return distutils.core.setup(**attrs)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 955, in run_commands
        self.run_command(cmd)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
        return orig.install.run(self)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/install.py", line 545, in run
        self.run_command('build')
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build.py", line 135, in run
        self.run_command(cmd_name)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
        _build_ext.run(self)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 339, in run
        self.build_extensions()
      File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/commands.py", line 272, in build_extensions
        "Failed `build_ext` step:\n{}".format(formatted_exception))
    commands.CommandError: Failed `build_ext` step:
    Traceback (most recent call last):
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 118, in _compile
        extra_postargs)
      File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn
        _classic_spawn(self, command)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/ccompiler.py", line 909, in spawn
        spawn(cmd, dry_run=self.dry_run)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 36, in spawn
        _spawn_posix(cmd, search_path, dry_run=dry_run)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 159, in _spawn_posix
        % (cmd, exit_status))
    distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/commands.py", line 267, in build_extensions
        build_ext.build_ext.build_extensions(self)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
        self._build_extensions_serial()
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
        self.build_extension(ext)
      File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
        _build_ext.build_extension(self, ext)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
        depends=ext.depends)
      File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 59, in _parallel_compile
        _compile_single_file, objects)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 266, in map
        return self._map_async(func, iterable, mapstar, chunksize).get()
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 644, in get
        raise self._value
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 119, in worker
        result = (True, func(*args, **kwds))
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
        return list(map(*args))
      File "/tmpfs/tmp/pip-install-m7g1ywho/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 54, in _compile_single_file
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 120, in _compile
        raise CompileError(msg)
    distutils.errors.CompileError: command 'gcc' failed with exit status 1
    
    
    ----------------------------------------
Command "/tmpfs/src/github/synthtool/env/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmpfs/tmp/pip-install-m7g1ywho/grpcio/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmpfs/tmp/pip-record-_f7n9x8q/install-record.txt --single-version-externally-managed --compile --install-headers /tmpfs/src/github/synthtool/env/include/site/python3.6/grpcio" failed with error code 1 in /tmpfs/tmp/pip-install-m7g1ywho/grpcio/
You are using pip version 18.1, however version 20.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
    main()
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
    spec.loader.exec_module(synth_module)  # type: ignore
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/kbuilder/.cache/synthtool/python-spanner/synth.py", line 86, in <module>
    python.py_samples()
  File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 132, in py_samples
    sample_readme_metadata = _get_sample_readme_metadata(sample_project_dir)
  File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 85, in _get_sample_readme_metadata
    shell.run([sys.executable, "-m", "pip", "install", "-r", requirements])
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
    raise exc
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
    encoding="utf-8",
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'pip', 'install', '-r', '/home/kbuilder/.cache/synthtool/python-spanner/samples/samples/requirements.txt']' returned non-zero exit status 1.
2020-12-03 06:26:25,656 autosynth [ERROR] > Synthesis failed
2020-12-03 06:26:25,656 autosynth [DEBUG] > Running: git reset --hard HEAD
HEAD is now at cf87cdf chore: release 2.1.0 (#173)
2020-12-03 06:26:25,675 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2020-12-03 06:26:25,688 autosynth [DEBUG] > Running: git clean -fdx
Removing .pre-commit-config.yaml
Removing __pycache__/
Removing google/__pycache__/
Removing google/cloud/__pycache__/
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module>
    main()
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main
    return _inner_main(temp_dir)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 334, in _inner_main
    commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 65, in synthesize_loop
    has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest)
  File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch
    synthesizer.synthesize(synth_log_path, self.environ)
  File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
    synth_proc.check_returncode()  # Raise an exception.
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
    self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.

Google internal developers can see the full log here.

PingingPool and TransactionPingingPool implementation does not match documentation

There is a major bug in the ping() function used by both of these pools. The function breaks out of its while True: loop when the pool is empty or when a session does not yet need to be pinged. This makes it unsuitable for use as a background thread, as we suggest, because the loop is likely to end the first time it is run.

Additionally, TransactionPingingPool puts used sessions into a pending-sessions queue so they can have transactions started on them. However, the begin_pending_transactions() function that removes them only runs once when the pool is created and once when the pool is bound to a database. The condition for the function's loop is:

    while not self._pending_sessions.empty():

which means that if at any point there are no pending sessions, any future pending sessions will not be refreshed. There is no documentation suggesting that a user needs to run this themselves, which makes this another major bug.
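A sketch of the wrapper the documented usage needs: keep calling ping() forever rather than letting the loop end. The pool object and the stop_event parameter below are illustrative stand-ins, not the library's current surface:

```python
import threading


def background_ping(pool, stop_event, poll_interval=1.0):
    """Ping the pool's sessions repeatedly until asked to stop.

    pool.ping() as currently implemented drains whatever sessions are due
    and then returns; looping around it restores the behaviour the
    "run it in a daemon thread" documentation implies.
    """
    while not stop_event.is_set():
        pool.ping()
        stop_event.wait(poll_interval)  # sleep, but wake promptly on stop
```

Usage would look like threading.Thread(target=background_ping, args=(pool, stop_event), daemon=True).start(), with stop_event.set() on shutdown.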

refactor: unused attribute in StreamedResultSet() class

While studying the StreamedResultSet() class, it came to my attention that it includes a _counter attribute which is never actually used (not even in unit tests). As it's protected, it seems to be intended for internal use by the object, not for users (users can simply take len() or count results themselves if they need to).

Proposing to remove the attribute.

/cc @larkee for approval

Add support for instance labels

Cloud Spanner supports adding labels to resources such as instances, which can then be used for filtering. Currently, the Python library does not allow users to set or get labels through the provided surface. Adding this support would allow instances created for running system tests to be labelled, so that instances left over from previous system test runs could be cleaned up as part of the testing setup.

Synthesis failed for python-spanner

Hello! Autosynth couldn't regenerate python-spanner. 💔

Here's the output from running synth.py:

osted.org/packages/30/9e/f663a2aa66a09d838042ae1a2c5659828bb9b41ea3a6efa20a20fd92b121/Jinja2-2.11.2-py2.py3-none-any.whl
  Saved ./Jinja2-2.11.2-py2.py3-none-any.whl
Collecting MarkupSafe==1.1.1 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 5))
  Using cached https://files.pythonhosted.org/packages/b2/5f/23e0023be6bb885d00ffbefad2942bc51a620328ee910f64abe5a8d18dd1/MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl
  Saved ./MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl
Collecting protobuf==3.13.0 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 6))
  Using cached https://files.pythonhosted.org/packages/30/79/510974552cebff2ba04038544799450defe75e96ea5f1675dbf72cc8744f/protobuf-3.13.0-cp36-cp36m-manylinux1_x86_64.whl
  Saved ./protobuf-3.13.0-cp36-cp36m-manylinux1_x86_64.whl
Collecting pypandoc==1.5 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 7))
  Using cached https://files.pythonhosted.org/packages/d6/b7/5050dc1769c8a93d3ec7c4bd55be161991c94b8b235f88bf7c764449e708/pypandoc-1.5.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmpfs/tmp/tmptzo1tu2a/setuptools-tmp/setuptools/__init__.py", line 6, in <module>
        import distutils.core
      File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/_distutils_hack/__init__.py", line 82, in create_module
        return importlib.import_module('._distutils', 'setuptools')
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/importlib/__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ModuleNotFoundError: No module named 'setuptools._distutils'
    
    ----------------------------------------
 (  Cache entry deserialization failed, entry ignored
Command "python setup.py egg_info" failed with error code 1 in /tmpfs/tmp/pip-build-g8skbd6y/pypandoc/
)
ERROR: no such package '@gapic_generator_python_pip_deps//': pip_import failed: Collecting click==7.1.2 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 1))
  Using cached https://files.pythonhosted.org/packages/d2/3d/fa76db83bf75c4f8d338c2fd15c8d33fdd7ad23a9b5e57eb6c5de26b430e/click-7.1.2-py2.py3-none-any.whl
  Saved ./click-7.1.2-py2.py3-none-any.whl
Collecting google-api-core==1.22.1 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/e0/2d/7c6c75013105e1d2b6eaa1bf18a56995be1dbc673c38885aea31136e9918/google_api_core-1.22.1-py2.py3-none-any.whl
  Saved ./google_api_core-1.22.1-py2.py3-none-any.whl
Collecting googleapis-common-protos==1.52.0 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 3))
  Using cached https://files.pythonhosted.org/packages/03/74/3956721ea1eb4bcf7502a311fdaa60b85bd751de4e57d1943afe9b334141/googleapis_common_protos-1.52.0-py2.py3-none-any.whl
  Saved ./googleapis_common_protos-1.52.0-py2.py3-none-any.whl
Collecting jinja2==2.11.2 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 4))
  Using cached https://files.pythonhosted.org/packages/30/9e/f663a2aa66a09d838042ae1a2c5659828bb9b41ea3a6efa20a20fd92b121/Jinja2-2.11.2-py2.py3-none-any.whl
  Saved ./Jinja2-2.11.2-py2.py3-none-any.whl
Collecting MarkupSafe==1.1.1 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 5))
  Using cached https://files.pythonhosted.org/packages/b2/5f/23e0023be6bb885d00ffbefad2942bc51a620328ee910f64abe5a8d18dd1/MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl
  Saved ./MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl
Collecting protobuf==3.13.0 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 6))
  Using cached https://files.pythonhosted.org/packages/30/79/510974552cebff2ba04038544799450defe75e96ea5f1675dbf72cc8744f/protobuf-3.13.0-cp36-cp36m-manylinux1_x86_64.whl
  Saved ./protobuf-3.13.0-cp36-cp36m-manylinux1_x86_64.whl
Collecting pypandoc==1.5 (from -r /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/gapic_generator_python/requirements.txt (line 7))
  Using cached https://files.pythonhosted.org/packages/d6/b7/5050dc1769c8a93d3ec7c4bd55be161991c94b8b235f88bf7c764449e708/pypandoc-1.5.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmpfs/tmp/tmptzo1tu2a/setuptools-tmp/setuptools/__init__.py", line 6, in <module>
        import distutils.core
      File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/_distutils_hack/__init__.py", line 82, in create_module
        return importlib.import_module('._distutils', 'setuptools')
      File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/importlib/__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ModuleNotFoundError: No module named 'setuptools._distutils'
    
    ----------------------------------------
 (  Cache entry deserialization failed, entry ignored
Command "python setup.py egg_info" failed with error code 1 in /tmpfs/tmp/pip-build-g8skbd6y/pypandoc/
)
INFO: Elapsed time: 2.240s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
FAILED: Build did NOT complete successfully (0 packages loaded)

Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
    main()
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
    spec.loader.exec_module(synth_module)  # type: ignore
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/kbuilder/.cache/synthtool/python-spanner/synth.py", line 30, in <module>
    include_protos=True,
  File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_bazel.py", line 46, in py_library
    return self._generate_code(service, version, "python", **kwargs)
  File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_bazel.py", line 183, in _generate_code
    shell.run(bazel_run_args)
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
    raise exc
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
    encoding="utf-8",
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['bazel', '--max_idle_secs=240', 'build', '//google/spanner/v1:spanner-v1-py']' returned non-zero exit status 1.
2020-08-31 05:24:46,361 autosynth [ERROR] > Synthesis failed
2020-08-31 05:24:46,361 autosynth [DEBUG] > Running: git reset --hard HEAD
HEAD is now at ca82c1f chore: release 1.18.0 (#119)
2020-08-31 05:24:46,367 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2020-08-31 05:24:46,371 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Removing google/__pycache__/
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 690, in <module>
    main()
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 539, in main
    return _inner_main(temp_dir)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 670, in _inner_main
    commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 375, in synthesize_loop
    has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 273, in synthesize_version_in_new_branch
    synthesizer.synthesize(synth_log_path, self.environ)
  File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
    synth_proc.check_returncode()  # Raise an exception.
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
    self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.

Google internal developers can see the full log here.

docs-presubmit failing

[18:13:16][ERROR] Failed to get build config
com.google.devtools.kokoro.config.ConfigException: Couldn't find build configuration file docs-presubmit.cfg or docs-presubmit.gcl under /tmp/workspace/workspace/cloud-devrel/client-libraries/python/googleapis/python-spanner/docs/docs-presubmit/src/github/python-spanner/.kokoro/docs.
	at com.google.devtools.kokoro.config.BuildConfigReader.lambda$read$2(BuildConfigReader.java:54)
	at java.util.Optional.orElseThrow(Optional.java:290)
	at com.google.devtools.kokoro.config.BuildConfigReader.read(BuildConfigReader.java:51)
	at com.google.devtools.kokoro.jenkins.plugin.kokorojob.store.NodeBuildConfigReader.invoke(NodeBuildConfigReader.java:39)
	at com.google.devtools.kokoro.jenkins.plugin.kokorojob.store.NodeBuildConfigReader.invoke(NodeBuildConfigReader.java:13)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2731)
	at hudson.remoting.UserRequest.perform(UserRequest.java:153)
	at hudson.remoting.UserRequest.perform(UserRequest.java:50)
	at hudson.remoting.Request$2.run(Request.java:336)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at org.jenkinsci.remoting.kokoro.RpcSlaveEngine$1$1.run(RpcSlaveEngine.java:107)
	at java.lang.Thread.run(Thread.java:748)
	at ......remote call to gcp_ubuntu-prod-yoshi-ubuntu-ir-819542672(Native Method)
	at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1537)
	at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
	at hudson.remoting.Channel.call(Channel.java:822)
	at hudson.FilePath.act(FilePath.java:985)
	at hudson.FilePath.act(FilePath.java:974)
	at com.google.devtools.kokoro.jenkins.plugin.kokorojob.store.ConfigStore.getKokoroBuildConfig(ConfigStore.java:102)
	at com.google.devtools.kokoro.jenkins.plugin.pipeline.KokoroFlowExecution.getBuildConfig(KokoroFlowExecution.java:661)
	at com.google.devtools.kokoro.jenkins.plugin.pipeline.KokoroFlowExecution.addPostScmSteps(KokoroFlowExecution.java:608)
	at com.google.devtools.kokoro.jenkins.plugin.pipeline.KokoroScmStepContext.onSuccess(KokoroScmStepContext.java:25)
	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

System test failures cleaning up backups

instance.list_backups fails at this line:

for backup in instance.list_backups():

because instance is the protobuf type google.cloud.spanner_admin_instance_v1.types.spanner_instance_admin.Instance instead of google.cloud.spanner_v1.instance.Instance. This is causing the following error in CI:

During handling of the above exception, another exception occurred:

    def setUpModule():
        if USE_EMULATOR:
            from google.auth.credentials import AnonymousCredentials

            emulator_project = os.getenv("GCLOUD_PROJECT", "emulator-test-project")
            Config.CLIENT = Client(
                project=emulator_project, credentials=AnonymousCredentials()
            )
        else:
            Config.CLIENT = Client()
        retry = RetryErrors(exceptions.ServiceUnavailable)

        configs = list(retry(Config.CLIENT.list_instance_configs)())

        instances = retry(_list_instances)()
        EXISTING_INSTANCES[:] = instances

        # Delete test instances that are older than an hour.
        cutoff = int(time.time()) - 1 * 60 * 60
        for instance in Config.CLIENT.list_instances("labels.python-spanner-systests:true"):
            if "created" not in instance.labels:
                continue
            create_time = int(instance.labels["created"])
            if create_time > cutoff:
                continue
            # Instance cannot be deleted while backups exist.
>           for backup in instance.list_backups():

tests/system/test_system.py:125:

See e.g. this failing test run.

It looks like it was introduced in #195, but the tests passed for that PR. The API changes to list_instances and list_backups that would have caused this happened back in #147 (see https://github.com/googleapis/python-spanner/pull/147/files#diff-f9e7537fc73135ee5d350541c1147e8ce8c71c505b01c9ea1187f9ee80540b19R328-R356), so I'm not sure why we're only seeing this issue now.

spanner: fix Database.session vs SessionCheckout context manager

This is an experience report coming from a use case this API client was never designed for. I am working on the spanner-django ORM plugin. My use case requires me to hold a Transaction alive for a few seconds while it is used across various functions.

Problem statement

The design of this API client assumes that folks will always invoke database.run_in_transaction to run a bunch of code within one function, and that database.run_in_transaction will handle retries, context checkouts, and the session that will create the Transaction.

In #10 (comment), @larkee pointed out to me that my usage of spanner_v1.Database.session() doesn't create a session from the pool that I might have provided! That came as a huge surprise to me and could explain a bunch of random errors I was getting from Spanner's server with NOT FOUND Session.

To even make this work, I had to fumble through the implementation details and access private methods:

global_session_pool = spanner.pool.BurstyPool()

def connect(...):
    # Correctly retrieve a session from the global session pool.
    # See:
    #   * https://github.com/orijtech/django-spanner/issues/291
    #   * https://github.com/googleapis/python-spanner/issues/10#issuecomment-585056760
    #
    # Adapted from:
    #   https://bit.ly/3c8MK6p: python-spanner, Git hash 997a03477b07ec39c7184
    #   google/cloud/spanner_v1/pool.py#L514-L535
    # TODO: File a bug to googleapis/python-spanner asking for a convenience
    # method, since invoking database.session() gives the wrong result,
    # yet using the pool correctly requires a context manager wrapped with
    # SessionCheckout and access to private methods, which leaks the
    # implementation details one must know to use it correctly.
    pool = db._pool
    session_checkout = spanner.pool.SessionCheckout(pool)
    session = session_checkout.__enter__()
    if not session.exists():
        session.create()
    return_session = lambda: session_checkout.__exit__(None, None, None)  # noqa

    return Connection(db, session, return_session)

Suggestion

The presence of spanner_v1.Database.session() as a public method that completely bypasses the pool the user passed in is a surprise, and it easily invites misuse that is very subtle to catch.

I think we can make this a whole lot easier to use correctly with something like:

session = database.checkout_session()
...
database.return_session(session)

where checkout_session() will handle the logic of the SessionCheckout

and then finally deprecate spanner_v1.Database.session(), which oddly requires the caller to first check whether the session exists and, at the end, also invoke session.delete().

The suggestion above will remove all that cognitive load.
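As a sketch of what checkout_session()/return_session() could wrap, here is a context-manager shape. This is a hypothetical helper, not an existing API; it assumes only the get()/put() interface that AbstractSessionPool subclasses such as BurstyPool already expose:

```python
from contextlib import contextmanager


@contextmanager
def checked_out_session(pool):
    """Check a session out of the pool and guarantee it is returned,
    even if the body raises. `pool` is anything exposing get()/put(),
    e.g. an AbstractSessionPool subclass such as BurstyPool."""
    session = pool.get()
    try:
        yield session
    finally:
        pool.put(session)
```

A database.checkout_session() could then delegate to the database's own pool, removing the need to touch db._pool or call SessionCheckout.__enter__ directly.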

Thank you!

Test code samples under CI

The documentation hosted on googleapis.dev is out of date and the code examples throw errors. These are contained here and need to be updated.

Unit test failing on master

There is a unit test failing on master:

___________________________ TestClient.test_constructor_implicit_credentials ____________________________

self = <tests.unit.test_client.TestClient testMethod=test_constructor_implicit_credentials>

    def test_constructor_implicit_credentials(self):
        creds = _make_credentials()
    
        patch = mock.patch("google.auth.default", return_value=(creds, None))
        with patch as default:
            self._constructor_test_helper(
                None, None, expected_creds=creds.with_scopes.return_value
            )
    
>       default.assert_called_once_with()

tests/unit/test_client.py:161: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/unit-2-7/lib/python2.7/site-packages/mock/mock.py:957: in assert_called_once_with
    return self.assert_called_with(*args, **kwargs)
.nox/unit-2-7/lib/python2.7/site-packages/mock/mock.py:944: in assert_called_with
    six.raise_from(AssertionError(_error_message(cause)), cause)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

value = AssertionError("expected call not found.\nExpected: default()\nActual: default(scopes=('https://www.googleapis.com/auth/spanner.admin',))",)
from_value = None

    def raise_from(value, from_value):
>       raise value
E       AssertionError: expected call not found.
E       Expected: default()
E       Actual: default(scopes=('https://www.googleapis.com/auth/spanner.admin',))

.nox/unit-2-7/lib/python2.7/site-packages/six.py:738: AssertionError

This appears to be due to googleapis/python-cloud-core#15

Slow _merge_values (parsing values from protobuf to string)

self._current_row.append(_parse_value(value, field.type_))

We have a query returning 80,000 rows with 71 fields in the select list, using Python 3 and Google Cloud Spanner API 1.17 (we also tried 1.18, 1.19, and 2.x). I chose version 1.17 because performance decreases with newer versions of the API.

The query itself returns in about 0.9 ms; the slowdown starts when I copy the rows from the StreamedResultSet iterator into a list.

I started to isolate the code, and I'm using an empty for loop to reproduce the problem and rule out any other application performance issue.

Code Snippet:

import time

from google.cloud.spanner import Client

"""Queries sample data from the database using SQL."""

project_id = 'cerc2-datalake-int-01'
instance_id = 'datalake-int-spanner-01'
database_id = 'cerc_datalake_bk_int14'

spanner_client = Client(project_id)

instance = spanner_client.instance(instance_id)
database = instance.database(database_id)

# Read the query from the query.sql file.
with open('query.sql', mode='r') as query_file:
    content = query_file.read()

# Execute the query.
start = time.time()
print('Starting query')
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(content)
end = time.time()
print(f'Query execution time: {end - start}')

# Count the returned rows.
# (Using version 2.x, this block raises a timestamp conversion error inside
# google-api-core; all datetime fields were removed from the query to test.)
start = time.time()
i = 0
for row in results:
    i += 1
end = time.time()
print(f'Iteration time: {end - start}')


Profiling the application with cProfile, I realized that the call to _parse_value_pb inside _merge_values in streamed.py is the slowest part of my application.

For testing purposes, I modified line 106 of streamed.py, which originally read:

self._current_row.append(_parse_value_pb(value, field.type))

The line has been replaced by:

self._current_row.append(value)

The first scenario (with _parse_value_pb): query 0.9 ms, result set iteration 25 seconds.

The second scenario (without _parse_value_pb): query 0.9 ms, result set iteration 6 seconds.

All these tests were run on my 2.4 GHz laptop; on App Engine, this routine takes more than 120 seconds.

I also tested with version 2.1 of the library and got 103 seconds instead of the 25 seconds of version 1.17, with the same behavior.

The performance is fair when the result set has many rows but few columns per row. I tested up to 30 result set columns and this behavior is not a problem; in my case, however, I need to work with 71 columns in each row.
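The cProfile measurement above can be reproduced with a small helper. This is a sketch (profile_iteration is a hypothetical name); it works on any iterable, and when passed a StreamedResultSet, _parse_value_pb shows up near the top of the output:

```python
import cProfile
import pstats


def profile_iteration(results, top=10):
    """Profile iterating a result set and print the hottest calls.

    Returns the number of rows consumed; with a StreamedResultSet the
    row-parsing helpers dominate the cumulative-time ranking."""
    profiler = cProfile.Profile()
    profiler.enable()
    count = sum(1 for _ in results)  # empty loop, mirroring the repro above
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(top)
    return count
```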

[Spanner] ExecuteSql times out at 60s, error message not helpful to realize why it timed out.

Today we have a default timeout of 60 seconds on query operations (ExecuteSql, ExecuteStreamingSql, Read, StreamingRead). We have instances of users authoring queries that take longer than this and being surprised by a 504 DEADLINE_EXCEEDED. I propose:

  1. Improving the error we throw when the deadline is exceeded client-side, not server-side, to point the user toward timeout and retry configuration.
  2. Considering an intermediate timeout setting for these methods. Today we have default (60s) and long_running (1 hour). A setting in between, like 5 minutes, could spare a higher percentage of users from hitting this limit in the first place.
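Proposal 1 could be sketched as a thin wrapper that turns a client-side deadline into an actionable message. Everything here is hypothetical naming, and the builtin TimeoutError stands in for google.api_core.exceptions.DeadlineExceeded:

```python
def call_with_timeout_hint(fn, *args, timeout=60.0, **kwargs):
    """Invoke fn with an explicit timeout and, when the deadline is hit
    on the client side, re-raise with guidance instead of a bare 504."""
    try:
        return fn(*args, timeout=timeout, **kwargs)
    except TimeoutError as exc:
        raise TimeoutError(
            f"Deadline of {timeout}s exceeded on the client side before the "
            "server responded; consider passing a larger timeout or tuning "
            "retry settings for long-running queries."
        ) from exc
```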

Deflake OpenTelemetry assertion in system tests

The OpenTelemetry assertions in the system tests fail if the transaction is aborted and retried. This is extremely brittle, so the assertions need to be updated to account for retried aborted transactions.

Example of test failure:

________ TestSessionAPI.test_transaction_read_and_insert_then_rollback _________

self = <tests.system.test_system.TestSessionAPI testMethod=test_transaction_read_and_insert_then_rollback>

    @RetryErrors(exception=exceptions.ServerError)
    @RetryErrors(exception=exceptions.Aborted)
    def test_transaction_read_and_insert_then_rollback(self):
        retry = RetryInstanceState(_has_all_ddl)
        retry(self._db.reload)()

        session = self._db.session()
        session.create()
        self.to_delete.append(session)

        with self._db.batch() as batch:
            batch.delete(self.TABLE, self.ALL)

        transaction = session.transaction()
        transaction.begin()

        rows = list(transaction.read(self.TABLE, self.COLUMNS, self.ALL))
        self.assertEqual(rows, [])

        transaction.insert(self.TABLE, self.COLUMNS, self.ROW_DATA)

        # Inserted rows can't be read until after commit.
        rows = list(transaction.read(self.TABLE, self.COLUMNS, self.ALL))
        self.assertEqual(rows, [])
        transaction.rollback()

        rows = list(session.read(self.TABLE, self.COLUMNS, self.ALL))
        self.assertEqual(rows, [])

        if HAS_OPENTELEMETRY_INSTALLED:
            span_list = self.memory_exporter.get_finished_spans()
>           self.assertEqual(len(span_list), 8)
E           AssertionError: 14 != 8

tests/system/test_system.py:1026: AssertionError
----------------------------- Captured stdout call -----------------------------
409 Transaction was aborted., Trying again in 1 seconds...
------------------------------ Captured log call -------------------------------
WARNING  opentelemetry.trace:__init__.py:468 Overriding current TracerProvider
WARNING  opentelemetry.trace:__init__.py:468 Overriding current TracerProvider
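One way to make the assertion tolerant of retries (a sketch; the helper name is hypothetical) is to assert on the distinct span names rather than an exact span count, since an aborted-and-retried transaction emits its spans again and inflates the total:

```python
def span_names(span_list):
    """Return the sorted distinct span names from a list of finished spans.

    The set of names stays stable across aborted-transaction retries,
    unlike the total span count (8 became 14 in the failure above)."""
    return sorted({span.name for span in span_list})
```

The test could then assert `len(span_list) >= 8` plus membership of the expected names in `span_names(span_list)`, instead of `len(span_list) == 8`.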

feature request: pandas connector

Is your feature request related to a problem? Please describe.

I'd like to be able to run a query against a Spanner database and download (possibly large-ish -- MBs to GBs) results to a pandas DataFrame. Specifically, I'd like to eventually use this as a component in an ibis connector, but it'd also be useful for general data processing pipelines.

Describe the solution you'd like

It seems that StreamedResultSet is the most natural place to put a to_dataframe method, similar to the RowIterator.to_dataframe method in the BigQuery client library.

Since pandas needn't be required to use this client library, the import should be conditional

https://github.com/googleapis/python-bigquery/blob/fb401bd94477323bba68cf252dd88166495daf54/google/cloud/bigquery/table.py#L29-L32

and the dependency listed in "extras".

https://github.com/googleapis/python-bigquery/blob/fb401bd94477323bba68cf252dd88166495daf54/setup.py#L50
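A minimal sketch of such a to_dataframe helper (hypothetical, not an existing API; it assumes only that the result set is iterable and exposes a fields list with name attributes, as StreamedResultSet does once rows have been consumed):

```python
def to_dataframe(result_set):
    """Collect a streamed result set into a pandas DataFrame.

    pandas is imported lazily so it can remain an optional "extras"
    dependency, mirroring the BigQuery client's approach."""
    try:
        import pandas
    except ImportError:
        raise ImportError(
            "pandas is required for to_dataframe(); install it via the "
            "appropriate extra."
        )
    rows = list(result_set)  # consume the stream so fields are populated
    columns = [field.name for field in result_set.fields]
    return pandas.DataFrame(rows, columns=columns)
```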

Describe alternatives you've considered

It's possible this is simpler than realized, so maybe it could just be a code sample.

If there were a SQLAlchemy connector (a much bigger project than a read-only pandas DataFrame), then pandas support would be basically free via pandas.read_sql.

Additional context

Related StackOverflow questions:

Investigate new Sphinx release changes

Sphinx has a new release: 3.0.0

This release has caused the docs generation to fail due to issues in CHANGELOG.md. The root cause in the CHANGELOG should be found and fixed so the library can continue to rely on the most recent update.

If this proves difficult, the version can be temporarily pinned to 2.2.4 in the interim.

samples.samples.backup_sample_test: test_restore_database failed

This test failed!

To configure my behavior, see the Build Cop Bot documentation.

If I'm commenting on this issue too often, add the buildcop: quiet label and I will stop commenting.


commit: 39ea948
buildURL: Build Status, Sponge
status: failed

Test output
args = (parent: "projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3"
database_id: "test-db-177a26949c"
backup: "projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8"
,)
kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/...37bf3'), ('x-goog-api-client', 'gl-python/3.6.10 grpc/1.33.1 gax/1.23.0 gapic/1.19.1 gccl/1.19.1')], 'timeout': 3599.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
    try:
      return callable_(*args, **kwargs)

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:57:


self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f7a01e59be0>
request = parent: "projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3"
database_id: "test-db-177a26949c"
backup: "projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8"

timeout = 3599.0
metadata = [('google-cloud-resource-prefix', 'projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/databases/tes.../test-instance-d26fd37bf3'), ('x-goog-api-client', 'gl-python/3.6.10 grpc/1.33.1 gax/1.23.0 gapic/1.19.1 gccl/1.19.1')]
credentials = None, wait_for_ready = None, compression = None

def __call__(self,
             request,
             timeout=None,
             metadata=None,
             credentials=None,
             wait_for_ready=None,
             compression=None):
    state, call, = self._blocking(request, timeout, metadata, credentials,
                                  wait_for_ready, compression)
  return _end_unary_response_blocking(state, call, False, None)

.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:826:


state = <grpc._channel._RPCState object at 0x7f7a01d97400>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f7a000de7c8>
with_call = False, deadline = None

def _end_unary_response_blocking(state, call, with_call, deadline):
    if state.code is grpc.StatusCode.OK:
        if with_call:
            rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
            return state.response, rendezvous
        else:
            return state.response
    else:
      raise _InactiveRpcError(state)

E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "Cannot create database projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/databases/test-db-177a26949c from backup projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8 because the backup is still being created. Please retry the operation once the pending backup is complete."
E debug_error_string = "{"created":"@1603445340.328777425","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Cannot create database projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/databases/test-db-177a26949c from backup projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8 because the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"
E >

.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:729: _InactiveRpcError

The above exception was the direct cause of the following exception:

capsys = <_pytest.capture.CaptureFixture object at 0x7f7a01da2ac8>

@RetryErrors(exception=DeadlineExceeded, max_tries=2)
def test_restore_database(capsys):
  backup_sample.restore_database(INSTANCE_ID, RESTORE_DB_ID, BACKUP_ID)

backup_sample_test.py:75:


backup_sample.py:69: in restore_database
operation = new_database.restore(backup)
../../google/cloud/spanner_v1/database.py:543: in restore
self._instance.name, self.database_id, backup=source.name, metadata=metadata
../../google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py:675: in restore_database
request, retry=retry, timeout=timeout, metadata=metadata
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in call
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
on_error=on_error,
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:184: in retry_target
return target()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)


value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Cannot create dat...the backup is still being created. Please retry the operation once the pending backup is complete.","grpc_status":9}"

???
E google.api_core.exceptions.FailedPrecondition: 400 Cannot create database projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/databases/test-db-177a26949c from backup projects/python-docs-samples-tests/instances/test-instance-d26fd37bf3/backups/test-backup-97f37b62a8 because the backup is still being created. Please retry the operation once the pending backup is complete.

:3: FailedPrecondition

DatabaseAdmin: on Duplicate name in Schema, gRPC FailedPrecondition status code is not handled and sent back as None

If I have a table that already exists and I try to create the same table again, this package raises an error, but the error's code is None even though its message is "Duplicate name in schema: foo.".

Reproduction

from google.cloud import spanner_v1 as spanner

def main():
    db = spanner.Client().instance('django-tests').database('db1')
    lro = db.update_ddl(['CREATE TABLE foo (id INT64) PRIMARY KEY(id)'])

    try:
        result = lro.result()
    except Exception as e:
        print('\033[31mCode: %s gRPC_StatusCode: %s Message: %s\033[00m' % 
                (e.code, e.grpc_status_code, e.message))
        raise e
    else:
        print(result)

if __name__ == '__main__':
    main()

which unfortunately prints out

Code: None gRPC_StatusCode: None Message: Duplicate name in schema: foo.
Traceback (most recent call last):
  File "duplicate_table_v1.py", line 18, in <module>
    main()
  File "duplicate_table_v1.py", line 12, in main
    raise e
  File "duplicate_table_v1.py", line 8, in main
    result = lro.result()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/future/polling.py", line 127, in result
    raise self._exception
google.api_core.exceptions.GoogleAPICallError: None Duplicate name in schema: foo.

This bug presents an inconsistency in the error handling because we get an error with a None status code and None gRPC status code, yet it has a message.

Comparison with Go

I can confirm that Cloud Spanner actually sends back the status code: the Go client surfaces it, as shown by this reproduction and by investigating the responses sent by the Spanner server.

package main

import (
	"context"

	dbadmin "cloud.google.com/go/spanner/admin/database/apiv1"
	dbspb "google.golang.org/genproto/googleapis/spanner/admin/database/v1"
)

func main() {
	ctx := context.Background()
	adminClient, err := dbadmin.NewDatabaseAdminClient(ctx)
	if err != nil {
		panic(err)
	}
	ddlReq := &dbspb.UpdateDatabaseDdlRequest{
		Database: "projects/orijtech-161805/instances/django-tests/databases/db1",
		Statements: []string{
			"CREATE TABLE foo (id INT64) PRIMARY KEY(id)",
		},
	}
	lro, err := adminClient.UpdateDatabaseDdl(ctx, ddlReq)
	if err != nil {
		panic(err)
	}
	if err := lro.Wait(ctx); err != nil {
		panic(err)
	}
}

and prints out

Sleeping for 947.779411ms
panic: rpc error: code = FailedPrecondition desc = Duplicate name in schema: foo.

goroutine 1 [running]:
main.main()
	/Users/emmanuelodeke/Desktop/spanner-orm-trials/duplicate_table.go:27 +0x1db
exit status 2

Postulation

I think the result of waiting on the long-running operation isn't being properly used to retrieve the status code.

/cc @larkee @skuruppu, and for an FYI @bvandiver @timgraham

Initialize a session for BurstyPool

The default session pool BurstyPool does not create any sessions when bound and only creates them on demand. This means the first call made with this database will be slow. A session should be created when binding the pool to the database.
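Until that lands, a workaround is to warm the pool up manually right after binding. This is a sketch (warm_up is a hypothetical helper) that assumes only the get()/put() interface of AbstractSessionPool; checking sessions out forces their creation, and putting them back leaves them ready in the pool:

```python
def warm_up(pool, count=1):
    """Eagerly create sessions in an otherwise-lazy pool such as
    BurstyPool, so the first real request doesn't pay the creation cost.

    Call after binding, e.g. instance.database(database_id, pool=pool)."""
    sessions = [pool.get() for _ in range(count)]
    for session in sessions:
        pool.put(session)
```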

FR: Adding additional functions in the API library

Is your feature request related to a problem? Please describe.

The current google.cloud.spanner library has very limited functionality compared to other database client libraries (e.g., BigQuery), and some key APIs are missing. For example:

get_table
table
list_tables
schema
query
execute

I would like to have the above functions added to the library.

Describe the solution you'd like

These functions could be implemented via INFORMATION_SCHEMA queries.
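For instance, a list_tables helper could be layered on top of the existing snapshot API. This is a hedged sketch: `database` is assumed to be a bound google.cloud.spanner_v1 Database object, and `list_tables` is a proposed name, not part of the current library.

```python
# Default-schema user tables have empty catalog and schema names in Spanner.
LIST_TABLES_SQL = (
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_catalog = '' AND table_schema = ''"
)


def list_tables(database):
    """Return the names of all user tables in the database."""
    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(LIST_TABLES_SQL)
        return [row[0] for row in results]
```

The other requested helpers (get_table, schema, and so on) would follow the same pattern against information_schema.columns and friends.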

spanner-client: Retry PDML on "Received unexpected EOS on DATA frame from server"

This bug is related to the Spanner client library.

For long-lived transactions (>= 30 minutes), in the case of large PDML changes, it is possible that the gRPC connection is terminated with the error "Received unexpected EOS on DATA frame from server".

In this case, we need to retry the transaction, either from the resume token received while reading the stream or from scratch. This ensures that the PDML transaction continues to execute until it succeeds or a hard timeout is reached.

We have already implemented such a change in the Java client library; for more information see this PR: googleapis/java-spanner#360.

To test the fix, we can use a large Spanner database. Please speak to @thiagotnunes for more details.
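The proposed retry can be sketched like this. Everything here is a stand-in: `execute_stream` represents the real partitioned-DML streaming RPC, and `TransientStreamError` represents the "Received unexpected EOS on DATA frame" failure; the actual fix would live inside the client's streaming machinery.

```python
class TransientStreamError(Exception):
    """Stand-in for a retryable mid-stream connection break."""


def run_pdml_with_resume(execute_stream, max_attempts=5):
    """Drive a PDML stream to completion, resuming on transient breaks."""
    resume_token = b""
    row_count = 0
    for attempt in range(max_attempts):
        try:
            for partial in execute_stream(resume_token=resume_token):
                # Remember where we are so a retry can pick up mid-stream
                # instead of restarting from scratch.
                resume_token = partial.resume_token or resume_token
                row_count = partial.row_count
            return row_count
        except TransientStreamError:
            if attempt == max_attempts - 1:
                raise
    raise TransientStreamError("exhausted retries")
```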

Incorrect return type in update_database_ddl() docs

In the update_database_ddl() method docs it says:

Returns:
A :class:`~google.cloud.spanner_admin_database_v1.types._OperationFuture` instance.

But it looks like it in fact returns google.api_core.operation.Operation:

(screenshot omitted)

It is documented correctly in the Database.update_ddl() method, which calls update_database_ddl() and returns its return value:

:rtype: :class:`google.api_core.operation.Operation`
:returns: an operation instance

This is the only place where update_database_ddl() is called, so I assume we can safely change the return type in the docs.

Unskip test_list_backup_operations

We are skipping the list_backup_operations test due to consistent failures caused by a production issue.

The test will be unskipped once the issue is fixed in production.

spanner: consider implementing long-running aka auto-refreshing Transaction

I understand that Cloud Spanner transactions are meant to be short-lived and then committed or rolled back. Referring to the authoritative advisory at https://cloud.google.com/spanner/docs/reference/rest/v1/TransactionOptions#idle-transactions, which says:

A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.

If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.

For my purposes I need a transaction that may be held by a Python DB-API v2 Cursor for an arbitrary period, so it definitely needs a refresh every 9 seconds (e.g. sending SELECT 1) to stay alive. I have prototyped such a Transaction at https://gist.github.com/odeke-em/a17aa49854aeae1d83ffc14715f52d79

Amid concurrency and use by other threads this becomes unpleasant to deal with, because at times both the refresher and the caller want to use the Transaction methods; hence the re-entrant locking over shared state.

However, this is a lot of work to do on top of this library (on top of other errors), and I feel the barrier to entry could be lowered if the library offered this as an option, so that I would just have to do

txn = sess.transaction(auto_refresh=True)

or, better yet, every Transaction should be able to auto-refresh.

samples.samples.backup_sample_test: test_create_backup failed

This test failed!

To configure my behavior, see the Build Cop Bot documentation.

If I'm commenting on this issue too often, add the buildcop: quiet label and
I will stop commenting.


commit: 39ea948
buildURL: Build Status, Sponge
status: failed

Test output
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7a01e52550>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f7a0216b8c8>
sleep_generator = <generator object exponential_sleep_generator at 0x7f7a01e4e150>
deadline = 1200, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
    """Call a function and retry if it fails.

    This is the lowest-level retry helper. Generally, you'll use the
    higher-level retry helper :class:`Retry`.

    Args:
        target(Callable): The function to call and retry. This must be a
            nullary function - apply arguments with `functools.partial`.
        predicate (Callable[Exception]): A callable used to determine if an
            exception raised by the target should be considered retryable.
            It should return True to retry or False otherwise.
        sleep_generator (Iterable[float]): An infinite iterator that determines
            how long to sleep between retries.
        deadline (float): How long to keep retrying the target. The last sleep
            period is shortened as necessary, so that the last retry runs at
            ``deadline`` (and not considerably beyond it).
        on_error (Callable[Exception]): A function to call while processing a
            retryable exception.  Any error raised by this function will *not*
            be caught.

    Returns:
        Any: the return value of the target function.

    Raises:
        google.api_core.RetryError: If the deadline is exceeded while retrying.
        ValueError: If the sleep generator stops yielding values.
        Exception: If the target raises a method that isn't retryable.
    """
    if deadline is not None:
        deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
            seconds=deadline
        )
    else:
        deadline_datetime = None

    last_exc = None

    for sleep in sleep_generator:
        try:
          return target()

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:184:


self = <google.api_core.operation.Operation object at 0x7f7a01e52550>
retry = <google.api_core.retry.Retry object at 0x7f7a02170e48>

def _done_or_raise(self, retry=DEFAULT_RETRY):
    """Check if the future is done and raise if it's not."""
    kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}

    if not self.done(**kwargs):
      raise _OperationNotComplete()

E google.api_core.future.polling._OperationNotComplete

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:86: _OperationNotComplete

The above exception was the direct cause of the following exception:

self = <google.api_core.operation.Operation object at 0x7f7a01e52550>
timeout = 1200, retry = <google.api_core.retry.Retry object at 0x7f7a02170e48>

def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
    """Poll and wait for the Future to be resolved.

    Args:
        timeout (int):
            How long (in seconds) to wait for the operation to complete.
            If None, wait indefinitely.
    """
    if self._result_set:
        return

    retry_ = self._retry.with_deadline(timeout)

    try:
        kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
      retry_(self._done_or_raise)(**kwargs)

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:107:


args = (), kwargs = {}
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7a01e52550>>)
sleep_generator = <generator object exponential_sleep_generator at 0x7f7a01e4e150>

@general_helpers.wraps(func)
def retry_wrapped_func(*args, **kwargs):
    """A wrapper that calls target function with retry."""
    target = functools.partial(func, *args, **kwargs)
    sleep_generator = exponential_sleep_generator(
        self._initial, self._maximum, multiplier=self._multiplier
    )
    return retry_target(
        target,
        self._predicate,
        sleep_generator,
        self._deadline,
      on_error=on_error,
    )

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:286:


target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7a01e52550>>)
predicate = <function if_exception_type..if_exception_type_predicate at 0x7f7a0216b8c8>
sleep_generator = <generator object exponential_sleep_generator at 0x7f7a01e4e150>
deadline = 1200, on_error = None

def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
    """Call a function and retry if it fails.

    This is the lowest-level retry helper. Generally, you'll use the
    higher-level retry helper :class:`Retry`.

    Args:
        target(Callable): The function to call and retry. This must be a
            nullary function - apply arguments with `functools.partial`.
        predicate (Callable[Exception]): A callable used to determine if an
            exception raised by the target should be considered retryable.
            It should return True to retry or False otherwise.
        sleep_generator (Iterable[float]): An infinite iterator that determines
            how long to sleep between retries.
        deadline (float): How long to keep retrying the target. The last sleep
            period is shortened as necessary, so that the last retry runs at
            ``deadline`` (and not considerably beyond it).
        on_error (Callable[Exception]): A function to call while processing a
            retryable exception.  Any error raised by this function will *not*
            be caught.

    Returns:
        Any: the return value of the target function.

    Raises:
        google.api_core.RetryError: If the deadline is exceeded while retrying.
        ValueError: If the sleep generator stops yielding values.
        Exception: If the target raises a method that isn't retryable.
    """
    if deadline is not None:
        deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
            seconds=deadline
        )
    else:
        deadline_datetime = None

    last_exc = None

    for sleep in sleep_generator:
        try:
            return target()

        # pylint: disable=broad-except
        # This function explicitly must deal with broad exceptions.
        except Exception as exc:
            if not predicate(exc):
                raise
            last_exc = exc
            if on_error is not None:
                on_error(exc)

        now = datetime_helpers.utcnow()

        if deadline_datetime is not None:
            if deadline_datetime <= now:
                six.raise_from(
                    exceptions.RetryError(
                        "Deadline of {:.1f}s exceeded while calling {}".format(
                            deadline, target
                        ),
                        last_exc,
                    ),
                  last_exc,
                )

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:206:


value = None, from_value = _OperationNotComplete()

???
E google.api_core.exceptions.RetryError: Deadline of 1200.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7a01e52550>>), last exception:

<string>:3: RetryError

During handling of the above exception, another exception occurred:

capsys = <_pytest.capture.CaptureFixture object at 0x7f7a01e3def0>
database = <google.cloud.spanner_v1.database.Database object at 0x7f7a01d2c400>

def test_create_backup(capsys, database):
  backup_sample.create_backup(INSTANCE_ID, DATABASE_ID, BACKUP_ID)

backup_sample_test.py:68:


backup_sample.py:41: in create_backup
operation.result(1200)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:129: in result
self._blocking_poll(timeout=timeout, **kwargs)


self = <google.api_core.operation.Operation object at 0x7f7a01e52550>
timeout = 1200, retry = <google.api_core.retry.Retry object at 0x7f7a02170e48>

def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
    """Poll and wait for the Future to be resolved.

    Args:
        timeout (int):
            How long (in seconds) to wait for the operation to complete.
            If None, wait indefinitely.
    """
    if self._result_set:
        return

    retry_ = self._retry.with_deadline(timeout)

    try:
        kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
        retry_(self._done_or_raise)(**kwargs)
    except exceptions.RetryError:
        raise concurrent.futures.TimeoutError(
          "Operation did not complete within the designated " "timeout."
        )

E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout.

.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:110: TimeoutError

spanner: InvalidArgument: 400 Previously received a different request with this seqno. seqno=4 with no concurrency applied

With spanner_v1 version 1.11, I am using this package without any concurrency, and yet I am getting back an obscure error that looks to me like an issue with the underlying gRPC library or with the coordination in this library:

google.api_core.exceptions.InvalidArgument: 400 Previously received a different request with this seqno. seqno=4

and that finally results in #10 even after I've removed PingingPool to use the default pool.
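For background on the error itself: within a transaction, each ExecuteSqlRequest carries a seqno that must be strictly increasing, and a seqno may only be replayed with an identical request. A per-transaction counter like the following (illustrative, not the library's actual implementation) shows the invariant that appears to be violated here, where two different requests ended up sharing seqno=4.

```python
import itertools
import threading


class SeqnoAllocator:
    """Hand out unique, monotonically increasing sequence numbers."""

    def __init__(self):
        self._counter = itertools.count(start=1)
        self._lock = threading.Lock()

    def next_seqno(self):
        # The lock guarantees no two statements can race to the same seqno,
        # even when the transaction is shared across threads.
        with self._lock:
            return next(self._counter)
```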

Please find the full stack trace below.

======================================================================
ERROR: test_xview_class (admin_docs.test_middleware.XViewMiddlewareTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.INVALID_ARGUMENT
	details = "Previously received a different request with this seqno. seqno=4"
	debug_error_string = "{"created":"@1580855981.982223904","description":"Error received from peer ipv4:108.177.13.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Previously received a different request with this seqno. seqno=4","grpc_status":3}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 89, in execute
    self.__handle_update(self.__get_txn(), sql, args or None)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 101, in __handle_update
    res = txn.execute_update(sql, params=params, param_types=get_param_types(params))
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/transaction.py", line 202, in execute_update
    metadata=metadata,
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 810, in execute_sql
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 Previously received a different request with this seqno. seqno=4
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 93, in execute
    raise ProgrammingError(e.details if hasattr(e, 'details') else e)
spanner.dbapi.exceptions.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/testcases.py", line 267, in __call__
    self._pre_setup()
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/testcases.py", line 938, in _pre_setup
    self._fixture_setup()
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/testcases.py", line 1165, in _fixture_setup
    self.setUpTestData()
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/tests/admin_docs/tests.py", line 11, in setUpTestData
    cls.superuser = User.objects.create_superuser(username='super', password='secret', email='[email protected]')
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/models.py", line 162, in create_superuser
    return self._create_user(username, email, password, **extra_fields)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/models.py", line 145, in _create_user
    user.save(using=self._db)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/contrib/auth/base_user.py", line 66, in save
    super().save(*args, **kwargs)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 741, in save
    force_update=force_update, update_fields=update_fields)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 779, in save_base
    force_update, using, update_fields,
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 851, in _save_table
    forced_update)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/base.py", line 900, in _do_update
    return filtered._update(values) > 0
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/query.py", line 760, in _update
    return query.get_compiler(self.db).execute_sql(CURSOR)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/sql/compiler.py", line 1462, in execute_sql
    cursor = super().execute_sql(result_type)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/models/sql/compiler.py", line 1133, in execute_sql
    cursor.execute(sql, params)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 67, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 76, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 93, in execute
    raise ProgrammingError(e.details if hasattr(e, 'details') else e)
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_xview_func (admin_docs.test_middleware.XViewMiddlewareTest)
ERROR: test_bookmarklets (admin_docs.test_views.AdminDocViewTests)
----------------------------------------------------------------------
(Both fail with tracebacks identical to test_xview_class above, ending in
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4)
    return executor(sql, params, many, context)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/cursor.py", line 93, in execute
    raise ProgrammingError(e.details if hasattr(e, 'details') else e)
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_index (admin_docs.test_views.AdminDocViewTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
 ...
    raise ProgrammingError(e.details if hasattr(e, 'details') else e)
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_missing_docutils (admin_docs.test_views.AdminDocViewTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
...
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
...
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_model_with_many_to_one (admin_docs.test_views.TestModelDetailView)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
...
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
======================================================================
ERROR: test_model_with_no_backward_relations_render_only_relevant_fields (admin_docs.test_views.TestModelDetailView)
----------------------------------------------------------------------
...<SAME CONTENT>
django.db.utils.ProgrammingError: 400 Previously received a different request with this seqno. seqno=4
----------------------------------------------------------------------
Ran 13 tests in 0.927s
FAILED (errors=45)
Testing against Django installed in '/home/travis/build/orijtech/spanner-orm/django_tests/django/django' with up to 2 processes
Importing application admin_default_site
Importing application admin_docs
Skipping setup of unused database(s): other.
Operations to perform:
  Synchronize unmigrated apps: admin_default_site, admin_docs, auth, contenttypes, messages, sessions, staticfiles
  Apply all migrations: admin, sites
Synchronizing apps without migrations:
  Creating tables...
    Creating table django_content_type
    Creating table auth_permission
    Creating table auth_group
    Creating table auth_user
    Creating table django_session
    Creating table admin_docs_company
    Creating table admin_docs_group
    Creating table admin_docs_family
    Creating table admin_docs_person
    Running deferred SQL...
Running migrations:
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
System check identified no issues (0 silenced).
Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.ABORTED
	details = "Transaction not found"
	debug_error_string = "{"created":"@1580855984.140864504","description":"Error received from peer ipv4:108.177.13.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Transaction not found","grpc_status":10}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "runtests.py", line 507, in <module>
    options.exclude_tags,
  File "runtests.py", line 294, in django_tests
    extra_tests=extra_tests,
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 639, in run_tests
    self.teardown_databases(old_config)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/runner.py", line 583, in teardown_databases
    keepdb=self.keepdb,
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/test/utils.py", line 299, in teardown_databases
    connection.creation.destroy_test_db(old_name, verbosity, keepdb)
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/creation.py", line 241, in destroy_test_db
    self.connection.close()
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/base.py", line 288, in close
    self._close()
  File "/home/travis/build/orijtech/spanner-orm/django_tests/django/django/db/backends/base/base.py", line 250, in _close
    return self.connection.close()
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/connection.py", line 30, in close
    self.commit()
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/spanner/dbapi/connection.py", line 60, in commit
    res = self.__txn.commit()
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/transaction.py", line 127, in commit
    metadata=metadata,
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 1556, in commit
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.Aborted: 409 Transaction not found
The command "bash django_test_suite.sh" exited with 1.