googleapis / python-api-core

Home Page: https://googleapis.dev/python/google-api-core/latest

License: Apache License 2.0

Python 96.32% Shell 3.43% Dockerfile 0.25%

python-api-core's Introduction

Core Library for Google Client Libraries

pypi versions

This library is not meant to stand alone. Instead, it defines common helpers used by all Google API clients. For more information, see the documentation.

Supported Python Versions

Python >= 3.7

Unsupported Python Versions

Python == 2.7, Python == 3.5, Python == 3.6.

The last version of this library compatible with Python 2.7 and 3.5 is google-api-core==1.31.1.

The last version of this library compatible with Python 3.6 is google-api-core==2.8.2.

python-api-core's People

Contributors

arithmetic1728, atulep, busunkim96, chemelnucfin, crwilcox, dandhlee, daniel-sanche, dhermes, emar-kar, eoogbe, gcf-owl-bot[bot], jkwlui, kbandes, landrito, lidizheng, lukesneeringer, ohmayr, parthea, plamut, rchen152, release-please[bot], renovate-bot, software-dov, theacodes, tseaver, tswast, vam-google, vchudnov-g, weiranfang, yoshi-automation


python-api-core's Issues

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.

Undeprecate IAM factory helpers

In addition to deprecating legacy role assignments, fd47fda (googleapis/google-cloud-python#9869) deprecated the Policy.user, Policy.service_account, Policy.group, Policy.domain, Policy.all_users, and Policy.authenticated_users entity factory helpers.

ISTM that those helpers should not be deprecated: they hide spelling details from users, and were not part of the "binding assignments" bit being deprecated (the assignable Policy.owners, Policy.editors, Policy.viewers properties): one still has to be able to construct the correctly-spelled entity when using the expected Policy.bindings[<ROLENAME>] spelling.
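The member-spelling rules these helpers encapsulate are easy to get wrong by hand. A minimal sketch of what such a factory helper does (modeled on, but not copied from, the google.api_core.iam.Policy helpers; the function name and prefix table here are illustrative):

```python
# Illustrative sketch of an IAM member-spelling helper. The real library
# exposes these as Policy.user, Policy.service_account, etc.
_MEMBER_PREFIXES = {
    "user": "user:",
    "service_account": "serviceAccount:",
    "group": "group:",
    "domain": "domain:",
}

def iam_member(kind, value):
    """Spell an IAM member string, e.g. iam_member("user", "a@example.com")."""
    return _MEMBER_PREFIXES[kind] + value

# Without such helpers, every caller must remember details like the
# camelCase "serviceAccount:" prefix when populating Policy.bindings[<ROLENAME>].
```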

google.api_core.iam.Policy.__getitem__ does not correctly save empty bindings

We recently tried to upgrade a tool from 1.15.0 to 1.26.1 and found it breaks some code that handles IAM policies. The commit that introduces the bug was released in 1.16.0: fd47fda#diff-7cc73ea72342c139ff54060be9ff25b2f792f9225e0cc0f501dca9dbed9c4741 -

The new __getitem__ implementation returns a new empty set() for roles not in the current policy. But it doesn't save that set in the bindings. So if the user manipulates it, the policy isn't actually updated. That breaks code written like this:

policy = resource.get_iam_policy()
policy['roles/storage.objectAdmin'].add(principal)
bucket.set_iam_policy(policy)

This worked fine on v1.15.0 because of the use of defaultdict. But now, this adds the principal to a set that's not used by the policy.
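The behavioral difference can be demonstrated with plain dictionaries, independent of the library (a sketch, not the library's code):

```python
import collections

# defaultdict(set) (the pre-1.16.0 behavior): the set returned by
# __getitem__ is stored in the mapping, so mutating it updates the policy.
bindings = collections.defaultdict(set)
bindings["roles/storage.objectAdmin"].add("user:a@example.com")
assert bindings["roles/storage.objectAdmin"] == {"user:a@example.com"}

# A __getitem__ that returns a fresh set() without saving it (the buggy
# behavior described above): the mutation is silently lost.
class LossyBindings(dict):
    def __getitem__(self, key):
        return dict.get(self, key, set())  # new set, never stored back

lossy = LossyBindings()
lossy["roles/storage.objectAdmin"].add("user:a@example.com")
assert "roles/storage.objectAdmin" not in lossy  # the add() went nowhere
```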

Something like the following (untested) patch should do the trick:

diff --git a/google/api_core/iam.py b/google/api_core/iam.py
index f130936..d650336 100644
--- a/google/api_core/iam.py
+++ b/google/api_core/iam.py
@@ -136,7 +136,9 @@ class Policy(collections_abc.MutableMapping):
         for b in self._bindings:
             if b["role"] == key:
                 return b["members"]
-        return set()
+
+        self[key] = set()
+        return self[key]

     def __setitem__(self, key, value):
         self.__check_version__()

Use package grpcio-status to parse rich errors in trailing metadata.

Package: https://pypi.org/project/grpcio-status/

Example: https://github.com/grpc/grpc/blob/master/examples/python/errors/client.py#L37

We are from the IAM Policy Analyzer team, and we've recently found that the Python client library does not seem to surface the details in the error Status properly. We've tried gcloud and it works as expected. The comparison is below:

$ gcloud beta asset analyze-iam-policy --project=cai-playground
ERROR: (gcloud.beta.asset.analyze-iam-policy) INVALID_ARGUMENT: Some specified value(s) are invalid.
- '@type': type.googleapis.com/google.rpc.BadRequest
  fieldViolations:
  - description: At least one of resource selector, identity selector or access selector
      needs to  be specified.
    field: analysis_query
$ gcloud auth application-default login
$ python3
>>> from google.cloud import asset_v1p4beta1
>>> from google.cloud.asset_v1p4beta1 import AnalyzeIamPolicyRequest, IamPolicyAnalysisQuery
>>> parent = "projects/cai-playground"
>>> client = asset_v1p4beta1.AssetServiceClient()
>>> response = client.analyze_iam_policy(request=AnalyzeIamPolicyRequest(analysis_query=IamPolicyAnalysisQuery(parent=parent)))

Traceback (most recent call last):
  File "/usr/local/google/home/aaronlichen/.local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/usr/local/google/home/aaronlichen/.local/lib/python3.8/site-packages/grpc/_channel.py", line 826, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/google/home/aaronlichen/.local/lib/python3.8/site-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.INVALID_ARGUMENT
        details = "Some specified value(s) are invalid."
        debug_error_string = "{"created":"@1602024200.721639270","description":"Error received from peer ipv4:74.125.142.95:443","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Some specified value(s) are invalid.","grpc_status":3}"
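As a sketch of what this request asks for, the rich BadRequest details can be pulled out of such an RpcError with grpcio-status roughly like this (imports are guarded since grpcio-status and googleapis-common-protos may not be installed; the helper name is illustrative):

```python
# Sketch: extracting google.rpc.BadRequest details from a gRPC error using
# the grpcio-status package, as the issue proposes api_core should do.
try:
    from grpc_status import rpc_status
    from google.rpc import error_details_pb2
    HAVE_GRPC_STATUS = True
except ImportError:  # libraries not installed in this sketch environment
    HAVE_GRPC_STATUS = False

def describe_bad_request(rpc_error):
    """Return (field, description) pairs from an RpcError's trailing metadata."""
    status = rpc_status.from_call(rpc_error)  # parses grpc-status-details-bin
    violations = []
    for detail in status.details:  # each detail is a packed protobuf Any
        if detail.Is(error_details_pb2.BadRequest.DESCRIPTOR):
            bad_request = error_details_pb2.BadRequest()
            detail.Unpack(bad_request)
            for violation in bad_request.field_violations:
                violations.append((violation.field, violation.description))
    return violations
```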

[AsyncIO] Overview of gRPC AsyncIO integration

AsyncIO introduces the async/await keywords. In order to plumb through the asynchronous-ness of functions, the library has to expose async functions on its surface. So, while integrating gRPC AsyncIO, we might need to instrument the following classes in order to make gRPC AsyncIO work.

Also, since python-api-core supports all Python versions, we can't use any async/await in existing modules. If needed, the import of new AsyncIO modules will be protected by an explicit version check.

An all-in-one draft PR can be found at #22.

  • google.api_core.async_future
  • google.api_core.gapic_v1.config_async
  • google.api_core.gapic_v1.method_async
  • google.api_core.operations_v1.operations_async_client
  • google.api_core.grpc_helpers_async
  • google.api_core.operation_async
  • google.api_core.retry_async

Unit tests are included for all new modules under tests/asyncio.

For each module, I will create separate issues and PRs later.

"int not callable" Exception when retry value passed to datastore operation


Environment details

  • OS type and version: macOS 11.1
  • Python version: python 3.7.9
  • pip version: pip 21.0.1
  • google-api-core version:
  • Name: google-api-core
    Version: 1.23.0
    Summary: Google API client core library
    Home-page: https://github.com/googleapis/python-api-core
    Author: Google LLC
    Author-email: [email protected]
    License: Apache 2.0
    Location: .../.../.../.../env/lib/python3.7/site-packages
    Requires: pytz, protobuf, setuptools, google-auth, six, googleapis-common-protos, requests
    Required-by: google-cloud-logging, google-cloud-firestore, google-cloud-datastore, google-cloud-core, google-api-python-client

Steps to reproduce

  1. Create datastore entities list
  2. Call put_multi and pass a non-zero retry value

Code example

    retry = 3
    timeout = 1000
    
    logger.debug('updating database')
    _,timestamp = get_timestamp()
    start_timestamp = timestamp
    result = client.put_multi(entities, retry=retry, timeout=timeout)

Stack trace

'int' object is not callable
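The underlying problem is that `retry = 3` passes an int where the API expects a `google.api_core.retry.Retry` object; api_core applies the retry object to the wrapped function as a callable, hence "'int' object is not callable". A sketch of the correct shape, with a guarded import since the library may not be installed here:

```python
# Sketch: `retry` must be a Retry object (or omitted entirely), never a
# count of attempts.
try:
    from google.api_core.retry import Retry
except ImportError:  # sketch environment without google-api-core
    Retry = None

if Retry is not None:
    retry = Retry(initial=1.0, maximum=10.0, multiplier=2.0)
    # result = client.put_multi(entities, retry=retry, timeout=timeout)
```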


Thanks!

Synthesis failed for python-api-core

Hello! Autosynth couldn't regenerate python-api-core. 💔

Here's the output from running synth.py:

2020-05-27 09:47:58,720 autosynth [INFO] > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/python-api-core
2020-05-27 09:47:59,238 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-05-27 09:47:59,242 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-05-27 09:47:59,245 autosynth [DEBUG] > Running: git config user.email [email protected]
2020-05-27 09:47:59,248 autosynth [DEBUG] > Running: git config push.default simple
2020-05-27 09:47:59,251 autosynth [DEBUG] > Running: git branch -f autosynth
2020-05-27 09:47:59,254 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2020-05-27 09:47:59,262 autosynth [DEBUG] > Running: git rev-parse --show-toplevel
2020-05-27 09:47:59,265 autosynth [DEBUG] > Running: git log -1 --pretty=%H
2020-05-27 09:47:59,269 autosynth [DEBUG] > Running: git remote get-url origin
2020-05-27 09:47:59,278 synthtool [ERROR] > Failed executing git clone --single-branch rpc://devrel/cloud/libraries/tools/autosynth /home/kbuilder/.cache/synthtool/autosynth:

Cloning into '/home/kbuilder/.cache/synthtool/autosynth'...
fatal: unable to find remote helper for 'rpc'

2020-05-27 09:47:59,278 autosynth [DEBUG] > Running: git clean -fdx
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
    main()
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
    return _inner_main(temp_dir)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 576, in _inner_main
    git_source.enumerate_versions(sources, pathlib.Path(temp_dir))
  File "/tmpfs/src/github/synthtool/autosynth/git_source.py", line 166, in enumerate_versions
    source_versions = enumerate_versions_for_source(git_source, temp_dir)
  File "/tmpfs/src/github/synthtool/autosynth/git_source.py", line 131, in enumerate_versions_for_source
    local_repo_dir = str(synthtool_git.clone(remote))
  File "/tmpfs/src/github/synthtool/synthtool/sources/git.py", line 83, in clone
    shell.run(cmd, check=True)
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
    raise exc
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
    encoding="utf-8",
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'clone', '--single-branch', 'rpc://devrel/cloud/libraries/tools/autosynth', PosixPath('/home/kbuilder/.cache/synthtool/autosynth')]' returned non-zero exit status 128.

Google internal developers can see the full log here.

Support Reset Connection as a first class exception in api_core

Is your feature request related to a problem? Please describe.
Users are continuously experiencing connection reset errors, which google-api-python-client started retrying in https://github.com/googleapis/google-api-python-client/releases/tag/v1.8.1. At the moment the error needs to be handled outside of the api_core package instead of being part of one of the supported exceptions in https://github.com/googleapis/python-api-core/blob/master/google/api_core/exceptions.py#L336, and it surfaces transport library types when it should be wrapped.

Describe the solution you'd like
Include Reset Connection as a first class citizen in https://github.com/googleapis/python-api-core/blob/master/google/api_core/exceptions.py#L336

Describe alternatives you've considered
Adding connection reset handling to the retry code of each client library manually. It could instead be handled generally in api_core.
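Until it is a first-class exception, a per-call workaround might combine the existing retry predicate helpers (a sketch of caller-side code, not the proposed api_core change; the import is guarded since google-api-core may not be installed here):

```python
try:
    from google.api_core.retry import Retry, if_exception_type
except ImportError:  # sketch environment without google-api-core
    Retry = if_exception_type = None

if Retry is not None:
    # Retry on the raw transport-level error until api_core wraps it in one
    # of its own exception types.
    retry_on_reset = Retry(predicate=if_exception_type(ConnectionResetError))
    # some_client_method(..., retry=retry_on_reset)
```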

cc: @crwilcox @busunkim96 @tritone @shollyman

Core: Race condition in bidi.BackgroundConsumer?

Environment details

Any OS, any Python version, google-api-core==1.9.0.

Steps to reproduce

No actual steps; I spotted this in bidi.py while working on a different issue. This is what the "Am I paused?" snippet in BackgroundConsumer._thread_main() looks like:

with self._wake:
    if self._paused:
        _LOGGER.debug("paused, waiting for waking.")
        self._wake.wait()
        _LOGGER.debug("woken.")

_LOGGER.debug("waiting for recv.")
response = self._bidi_rpc.recv()

The _paused state can be set / cleared by the pause() and resume() methods, respectively.

If paused, the code snippet from above blocks at the _wake.wait() call, and is unblocked some time after _wake.notifyAll() is invoked in the resume() method. When the resume() method notifies the waiting threads and releases the internal lock held by the self._wake condition, _wake.wait() tries to re-acquire that lock.

Now, suppose that some other thread invokes the pause() method in the meantime, and the latter obtains the self._wake's lock before _wake.wait() can grab it. The _paused flag will again be set to True, and when _wake.wait() finally acquires the lock and resumes, the code will invoke _bidi_rpc.recv() in the paused state.

For this reason the self._paused condition must be checked in a loop, and not in a single if statement, as the docs on threading.Condition state:

The while loop checking for the application’s condition is necessary because wait() can return after an arbitrary long time, and the condition which prompted the notify() call may no longer hold true. This is inherent to multi-threaded programming.

How much of a problem is this, i.e. invoking recv() in a paused state?


Edit:
The same can happen when exiting the with self._wake block. The lock gets released, pause() can acquire it and set self._paused to True, but _bidi_rpc.recv() will nevertheless be called, because it is placed outside the with block.
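Following the threading docs quoted above, a runnable sketch of the wait-in-a-loop idiom (a minimal stand-in for the pause/resume logic, not the actual bidi.py patch):

```python
import threading

class PausableConsumer:
    """Minimal stand-in for BackgroundConsumer's pause/resume handshake."""

    def __init__(self):
        self._wake = threading.Condition()
        self._paused = False

    def pause(self):
        with self._wake:
            self._paused = True

    def resume(self):
        with self._wake:
            self._paused = False
            self._wake.notify_all()

    def wait_until_running(self):
        with self._wake:
            # `while`, not `if`: wait() can return after the predicate has
            # been invalidated again by another thread calling pause().
            while self._paused:
                self._wake.wait()
        # Note: once the lock is released here, the state can change again
        # before recv() runs, which is the second race described above.

consumer = PausableConsumer()
consumer.pause()
t = threading.Timer(0.05, consumer.resume)  # resume shortly, from another thread
t.start()
consumer.wait_until_running()  # blocks until resume() notifies
t.join()
```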


Code example

I created a demo script that demonstrates why if <condition> is not sufficient, and subject to race conditions (requires Python 3.6+). The "worker" thread waits until a shared global number is set to 10, while five "changer threads" randomly change that number, and notify other threads if they change it to 10.

Sometimes the "worker" is invoked too late, and another "changer" thread changes 10 to something else, causing the "worker" to continue running when the target condition is no longer true. Replacing if not <condition> with while not <condition> gets rid of the bug.

...
INFO     [2019-04-29 23:09:20,327] Thread-Changer-0: Changing number from  9 to 10
INFO     [2019-04-29 23:09:20,327] Thread-Changer-0: New number == 10, notifying others
INFO     [2019-04-29 23:09:20,327] Thread-Worker   : I see the number 10
INFO     [2019-04-29 23:09:20,327] Thread-Changer-1: Changing number from 10 to  4
INFO     [2019-04-29 23:09:20,327] Thread-Changer-4: Changing number from  4 to 10
INFO     [2019-04-29 23:09:20,327] Thread-Changer-4: New number == 10, notifying others
INFO     [2019-04-29 23:09:20,328] Thread-Changer-2: Changing number from 10 to  3
INFO     [2019-04-29 23:09:21,328] Thread-Changer-3: Changing number from  3 to  9
INFO     [2019-04-29 23:09:21,328] Thread-Changer-1: Changing number from  9 to  6
INFO     [2019-04-29 23:09:21,328] Thread-Changer-4: Changing number from  6 to  4
INFO     [2019-04-29 23:09:21,328] Thread-Worker   : the number is not 10, will wait for condition
INFO     [2019-04-29 23:09:21,328] Thread-Changer-0: Changing number from  4 to  6
INFO     [2019-04-29 23:09:21,329] Thread-Changer-2: Changing number from  6 to  9
INFO     [2019-04-29 23:09:22,329] Thread-Changer-3: Changing number from  9 to 10
INFO     [2019-04-29 23:09:22,329] Thread-Changer-3: New number == 10, notifying others
INFO     [2019-04-29 23:09:22,329] Thread-Changer-1: Changing number from 10 to  5
INFO     [2019-04-29 23:09:22,329] Thread-Worker   : I see the number 5
...

The expected behavior is that the worker thread always prints out "I see the number 10" (the desired condition), and never "I see <non 10>". In other words, it should only proceed when the condition number == 10 is currently fulfilled.

In the bidi.py case, this would translate to BackgroundConsumer._thread_main() only resuming when self._paused == False, and never resuming when self._paused == True.

AttributeError: module 'grpc.experimental.aio' has no attribute 'StreamUnaryCall'

I am using the Google Document AI cloud service API and I am stuck with the error:

AttributeError: module 'grpc.experimental.aio' has no attribute 'StreamUnaryCall'

Environment details

google-api-core==1.22.1
google-auth==1.20.1
google-cloud==0.34.0
google-cloud-documentai==0.2.0
google-cloud-vision==1.0.0
googleapis-common-protos==1.52.0
grpcio==1.31.0

Code example

pip install google-cloud
pip install google-cloud-documentai
from google.cloud import documentai_v1beta2 

Thanks!

add client_cert_source to ClientOptions

google-api-go-client uses WithClientCertSource in ClientOptions to provide the client cert/key; we need to add a client_cert_source to ClientOptions in Python as well.
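A sketch of the proposed Python surface, mirroring Go's WithClientCertSource (the class and callback below are illustrative, not the shipped google.api_core.client_options API):

```python
class ClientOptions:
    """Illustrative sketch of client options carrying a client_cert_source."""

    def __init__(self, api_endpoint=None, client_cert_source=None):
        self.api_endpoint = api_endpoint
        # A callable returning (cert_bytes, key_bytes) for mTLS channels,
        # invoked lazily when the transport builds its channel.
        self.client_cert_source = client_cert_source

def my_cert_source():
    # Real code would load the client certificate and key, e.g. from disk
    # or a platform keystore.
    return b"-----BEGIN CERTIFICATE-----...", b"-----BEGIN PRIVATE KEY-----..."

options = ClientOptions(client_cert_source=my_cert_source)
cert, key = options.client_cert_source()
```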

Updating the pubsub client breaks because it needs a newer version of protobuf

Environment details

  • OS type and version: Windows 10
  • Python version: 3.6.9
  • pip version: 20.0.2
  • google-api-core version: 1.16.0

Steps to reproduce

  1. Install google-cloud-pubsub==1.6.0 and protobuf==3.10.0
  2. Import the PushConfig object to create a push topic

Quick fix

Reinstalling protobuf fixes this issue; to my knowledge, 3.12 seems to be enough.

Stack trace

Traceback (most recent call last):
  File "C:\Users\Sollum\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Sollum\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\src\main.py", line 19, in <module>
    from src.procedures import create_app
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\src\procedures.py", line 15, in <module>
    from src.utils.decorators import memoize
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\src\utils\decorators.py", line 14, in <module>
    from google.cloud.pubsub_v1.proto.pubsub_pb2 import PushConfig
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\venv\lib\site-packages\google\cloud\pubsub_v1\__init__.py", line 17, in <module>
    from google.cloud.pubsub_v1 import types
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\venv\lib\site-packages\google\cloud\pubsub_v1\types.py", line 32, in <module>
    from google.cloud.pubsub_v1.proto import pubsub_pb2
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\venv\lib\site-packages\google\cloud\pubsub_v1\proto\pubsub_pb2.py", line 30, in <module>
    create_key=_descriptor._internal_create_key,
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'

Segmentation Fault When Generating Field Masks For Protos with Optional Fields

Environment details

  • OS type and version: gLinux
  • Python version: 3.7.0
  • pip version: 10.0.1
  • google-api-core version: 1.22.1
  • protoc version: 3.12.3

Steps to reproduce

  1. Write the following to foo.proto:
syntax = "proto3";
 
message Nested {
  optional bool test = 1;
}
 
message Foo {
  optional string name = 1;
  Nested nested = 3;
}
  2. Compile with protoc: protoc --experimental_allow_proto3_optional --python_out=. foo.proto

  3. Write the following to repro.py:

from foo_pb2 import Foo
from google.api_core import protobuf_helpers
 
foo = Foo()
foo.name = 'name'
foo.nested.test = False
 
field_mask = protobuf_helpers.field_mask(None, foo)

Code example

  1. Install latest google-api-core: python -m pip install google-api-core
  2. Run: python repro.py

Stack trace

Using faulthandler:

Fatal Python error: Segmentation fault

Current thread 0x00007eff5260d180 (most recent call first):
  File "/venvs/simply-local-3.7.0/lib/python3.7/site-packages/google/api_core/protobuf_helpers.py", line 314 in field_mask
  File "repro.py", line 9 in <module>
Segmentation fault

'grpc.experimental.aio' has no attribute 'StreamUnaryCall' due to PR#29

We observe a stack trace on one of our clusters due to a package upgrade, possibly introduced by the new release of python-api-core, which we don't pin to a specific version.

Due to PR#29, we now see the following error:

'grpc.experimental.aio' has no attribute 'StreamUnaryCall'

We will pin our version to the previous one, but I'm documenting this here as it can affect other customers. We should probably pin more recent versions of the google.cloud libraries that are compatible; we're still researching this.

Environment details

  • OS type and version: image 1.4-ubuntu18 (GCP dataproc).
  • Python version: 3.6
  • pip version: pip --version
  • google-api-core version: 1.19.0

Steps to reproduce

  1. See the stack trace below; it happens on importing the BigQuery client.

Stack trace

    from google.cloud import bigquery, bigquery_storage, storage
  File "/opt/conda/default/lib/python3.6/site-packages/google/cloud/bigquery/__init__.py", line 35, in <module>
    from google.cloud.bigquery.client import Client
  File "/opt/conda/default/lib/python3.6/site-packages/google/cloud/bigquery/client.py", line 58, in <module>
    from google.cloud.bigquery import _pandas_helpers
  File "/opt/conda/default/lib/python3.6/site-packages/google/cloud/bigquery/_pandas_helpers.py", line 25, in <module>
    from google.cloud import bigquery_storage_v1beta1
  File "/opt/conda/default/lib/python3.6/site-packages/google/cloud/bigquery_storage_v1beta1/__init__.py", line 26, in <module>
    from google.cloud.bigquery_storage_v1beta1 import client
  File "/opt/conda/default/lib/python3.6/site-packages/google/cloud/bigquery_storage_v1beta1/client.py", line 24, in <module>
    import google.api_core.gapic_v1.method
  File "/opt/conda/default/lib/python3.6/site-packages/google/api_core/gapic_v1/__init__.py", line 26, in <module>
    from google.api_core.gapic_v1 import method_async  # noqa: F401
  File "/opt/conda/default/lib/python3.6/site-packages/google/api_core/gapic_v1/method_async.py", line 20, in <module>
    from google.api_core import general_helpers, grpc_helpers_async
  File "/opt/conda/default/lib/python3.6/site-packages/google/api_core/grpc_helpers_async.py", line 145, in <module>
    class _WrappedStreamUnaryCall(_WrappedUnaryResponseMixin, _WrappedStreamRequestMixin, aio.StreamUnaryCall):
AttributeError: module 'grpc.experimental.aio' has no attribute 'StreamUnaryCall'

Synthesis failed for python-api-core

Hello! Autosynth couldn't regenerate python-api-core. 💔

Here's the output from running synth.py:

2020-05-29 05:13:16,306 autosynth [INFO] > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/python-api-core
2020-05-29 05:13:16,841 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-05-29 05:13:16,844 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-05-29 05:13:16,847 autosynth [DEBUG] > Running: git config user.email [email protected]
2020-05-29 05:13:16,850 autosynth [DEBUG] > Running: git config push.default simple
2020-05-29 05:13:16,853 autosynth [DEBUG] > Running: git branch -f autosynth
2020-05-29 05:13:16,856 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2020-05-29 05:13:16,864 autosynth [DEBUG] > Running: git rev-parse --show-toplevel
2020-05-29 05:13:16,867 autosynth [DEBUG] > Running: git log -1 --pretty=%H
2020-05-29 05:13:16,870 autosynth [DEBUG] > Running: git remote get-url origin
2020-05-29 05:13:16,880 synthtool [ERROR] > Failed executing git clone --single-branch rpc://devrel/cloud/libraries/tools/autosynth /home/kbuilder/.cache/synthtool/autosynth:

Cloning into '/home/kbuilder/.cache/synthtool/autosynth'...
fatal: unable to find remote helper for 'rpc'

2020-05-29 05:13:16,880 autosynth [DEBUG] > Running: git clean -fdx
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
    main()
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
    return _inner_main(temp_dir)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 576, in _inner_main
    git_source.enumerate_versions(sources, pathlib.Path(temp_dir))
  File "/tmpfs/src/github/synthtool/autosynth/git_source.py", line 166, in enumerate_versions
    source_versions = enumerate_versions_for_source(git_source, temp_dir)
  File "/tmpfs/src/github/synthtool/autosynth/git_source.py", line 131, in enumerate_versions_for_source
    local_repo_dir = str(synthtool_git.clone(remote))
  File "/tmpfs/src/github/synthtool/synthtool/sources/git.py", line 83, in clone
    shell.run(cmd, check=True)
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
    raise exc
  File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
    encoding="utf-8",
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'clone', '--single-branch', 'rpc://devrel/cloud/libraries/tools/autosynth', PosixPath('/home/kbuilder/.cache/synthtool/autosynth')]' returned non-zero exit status 128.

Google internal developers can see the full log here.

1.17.0 breaking pubsub

Hi all,
Just a heads-up - 1.17.0 appears to be breaking the current pubsub client. Lots of time spent debugging this, hopefully it will help some other poor bastards out there:

DEBUG:google.api_core.retry:Retrying due to 503 The service was unable to fulfill your request. Please try again. [code=8a75], sleeping 0.2s ...
ERROR:grpc._channel:Exception iterating requests!
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 195, in consume_request_iterator
    request = next(request_iterator)
ValueError: generator already executing
DEBUG:google.api_core.bidi:Thread-ConsumeBidirectionalStream caught error None Exception iterating requests! and will exit. Generally this is due to the RPC itself being cancelled and the error will be surfaced to the calling code.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 144, in error_remapped_callable
    return _StreamingResponseIterator(result)
  File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 72, in __init__
    self._stored_first_result = six.next(self._wrapped)
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 416, in __next__
    return self._next()
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 706, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.UNKNOWN
    details = "Exception iterating requests!"
    debug_error_string = "None"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/google/api_core/bidi.py", line 637, in _thread_main
    self._bidi_rpc.open()
  File "/usr/local/lib/python3.7/site-packages/google/api_core/bidi.py", line 280, in open
    call = self._start_rpc(iter(request_generator), metadata=self._rpc_metadata)
  File "/usr/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/gapic/subscriber_client.py", line 1076, in streaming_pull
    requests, retry=retry, timeout=timeout, metadata=metadata
  File "/usr/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/usr/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 146, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.Unknown: None Exception iterating requests!
INFO:google.api_core.bidi:Thread-ConsumeBidirectionalStream exiting
INFO:google.cloud.pubsub_v1.subscriber._protocol.heartbeater:Thread-Heartbeater exiting.
INFO:google.cloud.pubsub_v1.subscriber._protocol.leaser:Thread-LeaseMaintainer exiting.

Pinning to 1.16.0 in requirements.txt fixes the issue.

Async wrap_stream_errors causing type error when running against emulator but not against live endpoint.

googleapis/python-firestore#286

While investigating the above issue, I think there may be an error in `_wrap_stream_errors`. Specifically, under emulation, call is never an instance of any of the specified types. However, if, rather than raising a TypeError, I return the call as is, everything seems to work as expected.


def _wrap_stream_errors(callable_):
    """Map errors for streaming RPC async callables."""
    grpc_helpers._patch_callable_name(callable_)

    @functools.wraps(callable_)
    async def error_remapped_callable(*args, **kwargs):

        call = callable_(*args, **kwargs)
        print(f"IN ASYNC REMAPPED CALLABLE {type(call)}")
        if isinstance(call, aio.UnaryStreamCall):
            call = _WrappedUnaryStreamCall().with_call(call)
        elif isinstance(call, aio.StreamUnaryCall):
            call = _WrappedStreamUnaryCall().with_call(call)
        elif isinstance(call, aio.StreamStreamCall):
            call = _WrappedStreamStreamCall().with_call(call)
        else:
            return call # NEW LINE
            #raise TypeError('Unexpected type of call %s' % type(call))

        await call.wait_for_connection()
        return call

    return error_remapped_callable

Also, it seems the sync method of this was changed in the same month. Is it possible the two implementations aren't a match? https://github.com/googleapis/python-api-core/blob/master/google/api_core/grpc_helpers.py#L132

Here is the code that will show sync working, but async failing.

import asyncio
from google.cloud.firestore import AsyncClient, Client
import os
import time

# ❯ gcloud beta emulators firestore start --host-port=localhost:8080
os.environ["FIRESTORE_EMULATOR_HOST"] = "localhost:8080"

async_client = AsyncClient()
client = Client()
document_name = f"test-{time.time()}"

async def document_set():
    client.collection("test").document("test_sync").set({"message": "Hello World!"})
  
    await async_client.collection("test").document(document_name).set({"message": "Hello World!"})
    doc = await async_client.collection("test").document(document_name).get()
    message = doc.get("message")
    print(f"Expect Hello World: {message}")


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(document_set())
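One way to make the fallback above less ad hoc would be to dispatch on behavior rather than on the concrete aio classes. This is only a sketch of that idea (the function name, the string tags, and the dispatch scheme are mine, not the library's): UnaryStreamCall exposes read(), StreamUnaryCall exposes write(), StreamStreamCall exposes both, and anything else passes through unwrapped, like the NEW LINE above.

```python
def pick_wrapper(call):
    """Classify a call object by the streaming interface it exposes."""
    reads = callable(getattr(call, "read", None))    # response streaming
    writes = callable(getattr(call, "write", None))  # request streaming
    if reads and writes:
        return "stream-stream"
    if reads:
        return "unary-stream"
    if writes:
        return "stream-unary"
    return "passthrough"  # unknown call type: return it as-is instead of raising

class EmulatorCall:
    """Stand-in for an emulator call object that mimics a unary-stream call."""
    async def read(self):
        return None

print(pick_wrapper(EmulatorCall()))  # unary-stream
print(pick_wrapper(object()))        # passthrough
```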

_create_composite_credentials creates a non-async request

In grpc_helpers.py:

def _create_composite_credentials(
        credentials=None,
        credentials_file=None,
        default_scopes=None,
        scopes=None,
        ssl_credentials=None,
        quota_project_id=None,
        default_host=None):
    # removed stuff before this
    request = google.auth.transport.requests.Request()

    # Create the metadata plugin for inserting the authorization header.

    # TODO: remove this if/else once google-auth >= 1.25.0 is required
    if _GOOGLE_AUTH_HAS_DEFAULT_SCOPES_AND_DEFAULT_HOST:
        metadata_plugin = google.auth.transport.grpc.AuthMetadataPlugin(
            credentials, request, default_host=default_host,
        )
    else:
        metadata_plugin = google.auth.transport.grpc.AuthMetadataPlugin(
            credentials, request
        )

    # Create a set of grpc.CallCredentials using the metadata plugin.
    google_auth_credentials = grpc.metadata_call_credentials(metadata_plugin)

    if ssl_credentials is None:
        ssl_credentials = grpc.ssl_channel_credentials()

    # Combine the ssl credentials and the authorization credentials.
    return grpc.composite_channel_credentials(
        ssl_credentials, google_auth_credentials
    )

This method gets called from grpc_helpers_async.py:

def create_channel(
        target,
        credentials=None,
        scopes=None,
        ssl_credentials=None,
        credentials_file=None,
        quota_project_id=None,
        default_scopes=None,
        default_host=None,
        **kwargs):

    composite_credentials = grpc_helpers._create_composite_credentials(
        credentials=credentials,
        credentials_file=credentials_file,
        scopes=scopes,
        default_scopes=default_scopes,
        ssl_credentials=ssl_credentials,
        quota_project_id=quota_project_id,
        default_host=default_host
    )

    return aio.secure_channel(target, composite_credentials, **kwargs)

But request = google.auth.transport.requests.Request() is based on requests, which isn't async. google.auth.transport does have an aiohttp transport: https://github.com/googleapis/google-auth-library-python/blob/master/google/auth/transport/_aiohttp_requests.py
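For illustration only (stdlib stand-ins, not the library's code): the general hazard of mixing a synchronous, requests-based transport into async code is that if the refresh ever runs on the event loop itself, every other task stalls for the duration of the call. gRPC may in practice invoke the metadata plugin off-loop, but the asymmetry with the aiohttp transport remains.

```python
import asyncio
import time

async def blocking_refresh():
    # Stand-in for a synchronous HTTP credential refresh: it never
    # yields control back to the event loop while it runs.
    time.sleep(0.2)

async def main():
    started = time.monotonic()
    other = asyncio.create_task(asyncio.sleep(0))  # another task wanting the loop
    await blocking_refresh()  # stalls the loop; "other" cannot run meanwhile
    await other
    return time.monotonic() - started

elapsed = asyncio.run(main())
print(f"event loop was stalled for ~{elapsed:.2f}s")
```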

Latest release is breaking deployments on GAE Flex

Environment details

  • OS type and version: GCP App Engine Python Flex
  • Python version: 3.6
  • pip version: pip --version
  • google-api-core version: 1.20.0

Steps to reproduce

  1. Deploy GAE app using default docker container to GAE Flex Python

Stack trace

File "/env/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker worker.init_process() 
File "/env/lib/python3.6/site-packages/gunicorn/workers/base.py", line 119, in init_process self.load_wsgi() 
File "/env/lib/python3.6/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi self.wsgi = self.app.wsgi() 
File "/env/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() 
File "/env/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 49, in load return self.load_wsgiapp() 
File "/env/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp return util.import_app(self.app_uri) 
File "/env/lib/python3.6/site-packages/gunicorn/util.py", line 358, in import_app mod = importlib.import_module(module) 
File "/env/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) 
File "<frozen importlib._bootstrap>", line 994, in _gcd_import 
File "<frozen importlib._bootstrap>", line 971, in _find_and_load 
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked 
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked 
File "<frozen importlib._bootstrap_external>", line 678, in exec_module 
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed 
File "/home/vmagent/app/main.py", line 14, in <module> from google.cloud import logging as google_logging 
File "/env/lib/python3.6/site-packages/google/cloud/logging/__init__.py", line 22, in <module> from google.cloud.logging.client import Client F
ile "/env/lib/python3.6/site-packages/google/cloud/logging/client.py", line 21, in <module> from google.cloud.logging import _gapic 
File "/env/lib/python3.6/site-packages/google/cloud/logging/_gapic.py", line 20, in <module> from google.cloud.logging_v2.gapic.config_service_v2_client import ConfigServiceV2Client 
File "/env/lib/python3.6/site-packages/google/cloud/logging_v2/__init__.py", line 18, in <module> from google.cloud.logging_v2.gapic import config_service_v2_client 
File "/env/lib/python3.6/site-packages/google/cloud/logging_v2/gapic/config_service_v2_client.py", line 25, in <module> import google.api_core.gapic_v1.client_info 
File "/env/lib/python3.6/site-packages/google/api_core/gapic_v1/__init__.py", line 26, in <module> from google.api_core.gapic_v1 import method_async # noqa: F401 
File "/env/lib/python3.6/site-packages/google/api_core/gapic_v1/method_async.py", line 20, in <module> from google.api_core import general_helpers, grpc_helpers_async 
File "/env/lib/python3.6/site-packages/google/api_core/grpc_helpers_async.py", line 145, in <module> class _WrappedStreamUnaryCall(_WrappedUnaryResponseMixin, _WrappedStreamRequestMixin, aio.StreamUnaryCall): AttributeError: module 'grpc.experimental.aio' has no attribute 'StreamUnaryCall'

See also this SO post

TypeError: 'float' object is not callable

How to reproduce:
(1) clone https://github.com/googleapis/python-cloudbuild and checkout sijun branch.
(2) run python -m nox -s unit-3.6 -- -s

Exception thrown:

    def _apply_decorators(func, decorators):
        """Apply a list of decorators to a given function.
    
        ``decorators`` may contain items that are ``None`` or ``False`` which will
        be ignored.
        """
        decorators = filter(_is_not_none_or_false, reversed(decorators))
    
        for decorator in decorators:
>           func = decorator(func)
E           TypeError: 'float' object is not callable

Description of the bug:

google.api_core.gapic_v1.method._GapicCallable.__call__ calls _determine_timeout to create a timeout_ object, then passes it to _apply_decorators(self._target, [retry, timeout_]).

In my example, _determine_timeout(600.0, 600.0, None) returns 600.0, a plain float; when that is passed to _apply_decorators, the 'float' object is not callable error is raised.

_determine_timeout should always return a Timeout object.

Misleading InternalServerError error on deserializing response

The following error is generated when a response from the RPC call cannot be deserialized:

  File "local/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
    return wrapped_func(*args, **kwargs)
  File "local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.InternalServerError: 500 Exception deserializing response!

This message is misleading: there was no InternalServerError, and no code 500 response was returned by the server; the actual RPC call completed just fine with HTTP code 200. The error happens when the wrong version of the proto is used to deserialize a response. So, instead of simply updating the client library to the latest version, the customer is led to believe that this error is related to a backend service failure.
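A sketch of the distinction the report asks for. The exception names and the remap helper below are illustrative, not api_core's: the point is only that a local deserialization failure should get its own accurate error rather than a server-style 500.

```python
class ServerError(Exception):
    """The backend actually returned an error status."""

class ResponseDecodeError(Exception):
    """The RPC succeeded, but the local proto could not parse the payload."""

def remap(status_code, payload_ok):
    # Only surface a server-style error when the server actually failed;
    # a local deserialization failure gets its own, accurate exception.
    if status_code != 200:
        raise ServerError(f"{status_code} returned by server")
    if not payload_ok:
        raise ResponseDecodeError(
            "RPC returned 200 but the response could not be deserialized; "
            "check for a client/proto version mismatch"
        )
    return "ok"

try:
    remap(200, payload_ok=False)
except ResponseDecodeError as exc:
    print(exc)  # points at the client-side mismatch, not the backend
```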

Core: Missing `grpc` import in `google.api_core.gapic_v1`

Environment details

  1. google.api_core.gapic_v1
  2. macOS Mojave - version 10.14.6
  3. Python 3.7.4
  4. google-api-core==1.16.0

Steps to reproduce

  1. Attempt to run from google.api_core.gapic_v1 import client_info
  2. Failure

Code example

from google.api_core.gapic_v1 import client_info 

Stack trace

In [3]: from google.api_core.gapic_v1 import client_info                                                                                                                                      
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-bcdafc433b8d> in <module>
----> 1 from google.api_core.gapic_v1 import client_info

~/Envs/google-api/lib/python3.7/site-packages/google/api_core/gapic_v1/__init__.py in <module>
     14 
     15 from google.api_core.gapic_v1 import client_info
---> 16 from google.api_core.gapic_v1 import config
     17 from google.api_core.gapic_v1 import method
     18 from google.api_core.gapic_v1 import routing_header

~/Envs/google-api/lib/python3.7/site-packages/google/api_core/gapic_v1/config.py in <module>
     21 import collections
     22 
---> 23 import grpc
     24 import six
     25 

ModuleNotFoundError: No module named 'grpc'

from_grpc fails with got an unexpected keyword argument 'retry'

A call to from_grpc fails with

Traceback (most recent call last):
  File "deploy.py", line 43, in <module>
    main()
  File "deploy.py", line 36, in main
    if op.done() == True:
  File "/home/rosariod/.virtualenv/automl/lib/python3.6/site-packages/google/api_core/operation.py", line 170, in done
    self._refresh_and_update(retry)
  File "/home/rosariod/.virtualenv/automl/lib/python3.6/site-packages/google/api_core/operation.py", line 157, in _refresh_and_update
    self._operation = self._refresh(retry=retry)
TypeError: _refresh_grpc() got an unexpected keyword argument 'retry'

It seems the Operation object created by from_grpc sets _refresh_grpc as self._refresh (https://github.com/googleapis/python-api-core/blob/master/google/api_core/operation.py#L299). But when _refresh_and_update is called by done, it passes a retry param down to _refresh_grpc (https://github.com/googleapis/python-api-core/blob/master/google/api_core/operation.py#L157), which _refresh_grpc doesn't accept (https://github.com/googleapis/python-api-core/blob/master/google/api_core/operation.py#L251), causing the error above.

It might be that the retry param could be correctly handled when the Operation object is created by from_gapic because in that case self._refresh is set to operations_client.get_operation https://github.com/googleapis/python-api-core/blob/master/google/api_core/operation.py#L325 which might be aware of the retry param but doesn't seem to be the case for from_grpc and from_http_json.

Drop use of pytz

From what I can tell, pytz is only used to access a UTC time zone, and to create fixed offsets in the tests. Both of these use cases are supported in the Python standard library datetime module in all supported versions of Python 3.
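For example, both use cases have direct stdlib replacements (offsets shown are arbitrary examples):

```python
from datetime import datetime, timedelta, timezone

# UTC, replacing pytz.UTC:
aware = datetime(2016, 12, 20, 21, 13, 47, tzinfo=timezone.utc)
print(aware.isoformat())  # 2016-12-20T21:13:47+00:00

# A fixed offset for tests, replacing pytz.FixedOffset(-300):
minus_five = timezone(timedelta(hours=-5))
print(datetime(2020, 1, 1, tzinfo=minus_five).utcoffset())
```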

Since pytz is deprecated or semi-deprecated, it would be a good idea to remove the pytz dependency as soon as possible. The master branch has not dropped Python 2.7 support yet, so I think the prudent course of action would be to drop Python 2.7 support entirely in the next release and drop the pytz dependency along with it.

If you cannot drop Python 2.7 support, it's fairly trivial to write a UTC object for Python 2.7 compatibility, or you can add a 2.7-only dependency on python-dateutil to get access to dateutil.tz.UTC and dateutil.tz.tzoffset.

The only downside to dropping pytz is the case where a user counts on your methods producing datetime objects with pytz.UTC attached, because they are doing something like dt.tzinfo.localize(something). pytz has its own non-standard time zone interface, and other tzinfo providers don't have the same API. This would only matter if someone is doing something like this:

stamp = datetime_helpers.DatetimeWithNanoseconds.from_rfc3339("2016-12-20T21:13:47.123456789Z")
dt = stamp.tzinfo.localize(datetime(2020, 1, 1))

It seems unlikely that anyone is counting on this, and it's easy enough to fix if they are, particularly for UTC objects.

If you are very worried about it, I have a pytz-deprecation-shim module that provides the same API as pytz, but can be used as a normal tzinfo, and raises deprecation warnings whenever the pytz-specific methods are used. I believe that is probably a heavier dependency than you need for these purposes (it also works in Python 2.7).

CC: @geofft

Improving API Core gRPC error reporting

While looking into googleapis/python-firestore#4, it seems gRPC reports child error details. While this is helpful, it can be a bit misleading to a user, as it will pair things like 503 (unavailable) with the text 'deadline exceeded' (error code 504), which seems strange.

I reached out to @lidizheng to discuss this and they brought up using debug_error_string: https://github.com/grpc/grpc/blob/d3e97d953b9a94d017d76a44b780bb5ca48e5840/src/python/grpcio/grpc/_channel.py#L80

We could potentially add rpc_exc.debug_error_string() to the exceptions formed at from_grpc_error:

rpc_exc.code(), rpc_exc.details(), errors=(rpc_exc,), response=rpc_exc

Note: this could be verbose. An example raised:

{"created":"@1615494939.757345505","description":"xds call failed","file":"src/core/ext/xds/xds_client.cc","file_line":1260}
E0311 20:35:39.757768839  443158 xds_cluster_resolver.cc:742] [xds_cluster_resolver_lb 0x7ffb5c005550] discovery mechanism 0 xds watcher reported error: {"created":"@1615494939.757345505","description":"xds call failed","file":"src/core/ext/xds/xds_client.cc","file_line":1260}

Though in the case of the bug raised the error would look more like "receiving error from server, which is "Deadline Exceeded"."
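A sketch of what attaching the debug string in from_grpc_error could look like. The RpcError below is a stub standing in for grpc.RpcError, and build_message is illustrative, not the library's actual message builder:

```python
class FakeRpcError(Exception):
    # Stub standing in for grpc.RpcError / grpc.Call.
    def code(self):
        return "UNAVAILABLE"
    def details(self):
        return "Deadline Exceeded"
    def debug_error_string(self):
        return '{"description":"xds call failed","file":"src/core/ext/xds/xds_client.cc"}'

def build_message(rpc_exc):
    message = f"{rpc_exc.code()} {rpc_exc.details()}"
    # Append the transport-level detail when the error exposes it; this is
    # the verbose part, so a real change might gate it behind a flag.
    debug = getattr(rpc_exc, "debug_error_string", None)
    if debug is not None:
        message += f" [{debug()}]"
    return message

print(build_message(FakeRpcError()))
```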

pip install fails with grpc extra

I filed this as a bug with pip because I'm pretty sure it is with them, but their devs closed my issue immediately and said to raise it with you. The following Dockerfile reproduces the issue: when I run it, pip blows up in a weird way, trying to download and resolve every available version of the project's dependencies.

FROM debian:bullseye-slim

ENV VIRTUAL_ENV=/venv

RUN apt-get update \
    && apt-get install --no-install-recommends --allow-unauthenticated -y \
        python3.9-dev \
        python3.9-venv \
        python3-pip \
    && python3.9 -m venv ${VIRTUAL_ENV} \
    && ${VIRTUAL_ENV}/bin/pip install --upgrade pip \
    && ${VIRTUAL_ENV}/bin/pip3 install wheel \
    && mkdir wheels \
    && mkdir build

RUN printf "from setuptools import setup\nsetup(install_requires=['google-api-core[grpc]'])\n" > /build/setup.py \
    && printf "\nPython version: %s\n" "$(${VIRTUAL_ENV}/bin/python3.9 --version)" \
    && printf "\nPip version: %s\n\n" "$(${VIRTUAL_ENV}/bin/pip3 --version)" \
    && ${VIRTUAL_ENV}/bin/pip3 wheel --no-cache-dir --wheel-dir=/wheels -e /build \
    && ${VIRTUAL_ENV}/bin/pip3 install --no-cache-dir /wheels/*

Don't use `pkg_resources.get_distribution(..).version`

Environment details

  • OS type and version: Alpine Linux edge
  • Python version: 3.8
  • pip version: Irrelevant, pure system packages used
  • google-api-core version: 1.16.0

Issue

I'm trying to run the tests with the end goal being packaging this for Alpine Linux.
All the dependencies are installed and I'm running the tests with PYTHONPATH="$PWD/build/lib" pytest so I can run the tests without having the package installed yet.

I'm probably just doing something wrong, but every test file fails with the following:

_____________________________________________ ERROR collecting tests/unit/test_bidi.py ______________________________________________
tests/unit/test_bidi.py:24: in <module>
    from google.api_core import bidi
build/lib/google/api_core/__init__.py:23: in <module>
    __version__ = get_distribution("google-api-core").version
/usr/lib/python3.8/site-packages/pkg_resources/__init__.py:482: in get_distribution
    dist = get_provider(dist)
/usr/lib/python3.8/site-packages/pkg_resources/__init__.py:358: in get_provider
    return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
/usr/lib/python3.8/site-packages/pkg_resources/__init__.py:901: in require
    needed = self.resolve(parse_requirements(requirements))
/usr/lib/python3.8/site-packages/pkg_resources/__init__.py:787: in resolve
    raise DistributionNotFound(req, requirers)
E   pkg_resources.DistributionNotFound: The 'google-api-core' distribution was not found and is required by the application
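A sketch of the fix the title asks for: prefer importlib.metadata (stdlib since Python 3.8) over pkg_resources, and degrade gracefully when the distribution isn't installed, e.g. when running tests against build/lib via PYTHONPATH. The helper name and fallback string here are illustrative.

```python
from importlib import metadata

def detect_version(dist_name):
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        # Package not installed (e.g. running from a source checkout):
        # fall back instead of crashing at import time.
        return "0.0.0+unknown"

print(detect_version("this-distribution-is-not-installed"))  # 0.0.0+unknown
```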

client_options.from_dict should accept other mapping types

Raised by @plamut in googleapis/google-api-python-client#829.

Is there a reason for accepting dicts only? Someone might prefer having the options stored in a different dict flavor, e.g. defaultdict, and accepting mappings in general would represent a usability improvement to them.

Documentation in this library needs to be updated. See https://googleapis.dev/python/google-api-core/latest/client_options.html

Clients should also be updated to accept mapping types:

 if isinstance(client_options, six.moves.collections_abc.Mapping):

instead of

if type(client_options) == dict:
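The difference between the two checks can be sketched with stdlib types only (the helper name and endpoint value are illustrative, not the client_options API):

```python
from collections import defaultdict
from collections.abc import Mapping

def from_mapping(options):
    # The suggested check: accept any Mapping, not just the exact dict type.
    if not isinstance(options, Mapping):
        raise TypeError(f"Expected a mapping, got {type(options).__name__}")
    return dict(options)

opts = defaultdict(str, {"api_endpoint": "fake.googleapis.com"})
assert type(opts) != dict      # the old `type(...) == dict` check rejects this
print(from_mapping(opts))      # but any Mapping flavor is accepted
```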

Spelling errors in retry.py docs

Hi,

I have found the following spelling errors in retry.py docs while I am reading the code:

        initial (float): The minimum a,out of time to delay in seconds. This
            must be greater than 0.
        maximum (float): The maximum amout of time to delay in seconds.

which should be:

        initial (float): The minimum amount of time to delay in seconds. This
            must be greater than 0.
        maximum (float): The maximum amount of time to delay in seconds.

I will open a PR to fix it.
