modzy / sdk-python

Python library for Modzy Machine Learning Operations (MLOps) Platform

License: Apache License 2.0

Makefile 0.78% Python 99.22%
mlops python deployment drift-detection explainable-ai kuberenetes machine-learning machine-learning-operations microservices model-deployment model-serving production-machine-learning serving docker ai-security api-client

sdk-python's Introduction

Installation

Install Modzy's Python SDK with pip:

  pip install modzy-sdk

Usage/Examples

Initializing the SDK

Initialize your client by authenticating with an API key. You can download an API Key from your instance of Modzy.

from modzy import ApiClient

# Sets BASE_URL and API_KEY values
# Best to set these as environment variables
BASE_URL = "Valid Modzy URL" # e.g., "https://trial.modzy.com"
API_KEY = "Valid Modzy API Key" # e.g., "JbFkWZMx4Ea3epIrxSgA.a2fR36fZi3sdFPoztAXT"

client = ApiClient(base_url=BASE_URL, api_key=API_KEY)

Running Inferences

Raw Text Inputs

Submit an inference job to a text-based model by providing the model ID, version, and raw input text:

# Creates a dictionary for text input(s)
sources = {}

# Adds any number of inputs
sources["first-phone-call"] = {
    "input.txt": "Mr Watson, come here. I want to see you.",
}

# Submit the text to v1.0.1 of a Sentiment Analysis model (set explain=True to make the job explainable)
job = client.jobs.submit_text("ed542963de", "1.0.1", sources, explain=False)

File Inputs

Pass a file from your local directory to a model by providing the model ID, version, and the filepath of your sample data:

# Generate a mapping of your local file (nyc-skyline.jpg) to the input filename the model expects
sources = {"nyc-skyline": {"image": "./images/nyc-skyline.jpg"}}

# Submit the image to v1.0.1 of an Image-based Geolocation model
job = client.jobs.submit_file("aevbu1h3yw", "1.0.1", sources)

Embedded Inputs

Convert images and other large inputs to base64 embedded data and submit to a model by providing a model ID, version number, and dictionary with one or more base64 encoded inputs:

from modzy._util import file_to_bytes

# Embed input as a string in base64
image_bytes = file_to_bytes('./images/tower-bridge.jpg')
# Prepare the source dictionary
sources = {"tower-bridge": {"image": image_bytes}}

# Submit the image to v1.0.1 of an Image-based Geolocation model
job = client.jobs.submit_embedded("aevbu1h3yw", "1.0.1", sources)

Inputs from Databases

Submit data from a SQL database to a model by providing a model ID, version, a SQL query, and database connection credentials:

# Add database connection and query information
db_url = "jdbc:postgresql://db.bit.io:5432/bitdotio"
db_username = DB_USER_NAME
db_password = DB_PASSWORD
db_driver = "org.postgresql.Driver"
# Select as "input.txt" because that is the required input name for this model
db_query = "SELECT \"mailaddr\" as \"input.txt\" FROM \"user/demo_repo\".\"atl_parcel_attr\" LIMIT 10;"

# Submit the database query to v0.0.12 of a Named Entity Recognition model
job = client.jobs.submit_jdbc("a92fc413b5","0.0.12",db_url, db_username, db_password, db_driver, db_query)

Inputs from Cloud Storage

Submit data directly from your cloud storage bucket (Amazon S3, Azure Blob Storage, and NetApp StorageGRID are supported) by providing a model ID, version, and storage-provider-specific parameters.

AWS S3

# Define sources dictionary with bucket and key that points to the correct file in your s3 bucket
sources = {
  "first-amazon-review": {
    "input.txt": {
      "bucket": "s3-bucket-name",
      "key": "key-to-file.txt"
    }
  }
}

AWS_ACCESS_KEY = "aws-access-key"
AWS_SECRET_ACCESS_KEY = "aws-secret-access-key"
AWS_REGION = "us-east-1"

# Submit s3 input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_aws_s3("ed542963de", "1.0.1", sources, AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, AWS_REGION)

Azure Blob Storage

# Define sources dictionary with container name and filepath that points to the correct file in your Azure Blob container
sources = {
  "first-amazon-review": {
    "input.txt": {
      "container": "azure-blob-container-name",
      "filePath": "key-to-file.txt"
    }
  }
}

AZURE_STORAGE_ACCOUNT = "Azure-Storage-Account"
AZURE_STORAGE_ACCOUNT_KEY = "cvx....ytw=="

# Submit Azure Blob input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_azureblob("ed542963de", "1.0.1", sources, AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCOUNT_KEY)

NetApp StorageGRID

# Define sources dictionary with bucket name and key that points to the correct file in your NetApp StorageGRID bucket
sources = {
  "first-amazon-review": {
    "input.txt": {
      "bucket": "bucket-name",
      "key": "key-to-file.txt"
    }
  }
}

ACCESS_KEY = "access-key"
SECRET_ACCESS_KEY = "secret-access-key"
STORAGE_GRID_ENDPOINT = "https://endpoint.storage-grid.example"

# Submit StorageGRID input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_storagegrid("ed542963de", "1.0.1", sources, ACCESS_KEY, SECRET_ACCESS_KEY, STORAGE_GRID_ENDPOINT)

Getting Results

Modzy's APIs are asynchronous by nature, which means you can use the results API to query available results for all completed inference jobs at any point in time. There are two ways you might leverage this Python SDK to query results:

Block Job until it completes

This method mimics a synchronous API by chaining two API calls with a utility function that polls until the job completes.

# Define sources dictionary with input data
sources = {"my-input": {"input.txt": "Today is a beautiful day!"}}
# Submit the text to v1.0.1 of a Sentiment Analysis model (set explain=True to make the job explainable)
job = client.jobs.submit_text("ed542963de", "1.0.1", sources, explain=False)
# Use block until complete method to periodically ping the results API until job completes
results = client.results.block_until_complete(job, timeout=None, poll_interval=5)

Query a Job's Result

This method simply queries the results for a job at any point in time and returns the status of the job, which includes the results if the job has completed.

#  Query results for a job at any point in time
results = client.results.get(job)
#  Print the inference results
results_json = results.get_first_outputs()['results.json']
print(results_json)

Deploying Models

Deploy a model to your private model library in Modzy:

from modzy import ApiClient

# Sets BASE_URL and API_KEY values
# Best to set these as environment variables
BASE_URL = "Valid Modzy URL" # e.g., "https://trial.modzy.com"
API_KEY = "Valid Modzy API Key" # e.g., "JbFkWZMx4Ea3epIrxSgA.a2fR36fZi3sdFPoztAXT"

client = ApiClient(base_url=BASE_URL, api_key=API_KEY)

model_data = client.models.deploy(
    container_image="modzy/grpc-echo-model:1.0.0",
    model_name="Echo Model",
    model_version="0.0.1",
    sample_input_file="./test.txt",
    run_timeout="60",
    status_timeout="60",
    short_description="This model returns the same text passed through as input, similar to an 'echo.'",
    long_description="This model returns the same text passed through as input, similar to an 'echo.'",
    technical_details="This section can include any technical information about your model. Include information about how your model was trained, any underlying architecture details, or other pertinent information an end-user would benefit from learning.",
    performance_summary="This is the performance summary."
)

print(model_data)

To use client.models.deploy(), four fields are required:

  • container_image (str): This parameter must represent a container image repository and tag name, i.e., the string you would include after a docker pull command. For example, if you would download this container image using docker pull modzy/grpc-echo-model:1.0.0, include just modzy/grpc-echo-model:1.0.0 for this parameter.
  • model_name: The name of the model you would like to deploy
  • model_version: The version of the model you would like to deploy
  • sample_input_file: Filepath to a sample piece of data that your model is expected to process and perform inference against.

Running Inferences at the Edge

The SDK provides support for running inferences on edge devices through Modzy's Edge Client. The inference workflow is almost identical to the workflow outlined above, and it provides functionality for interacting with both the Job and Inference APIs:

Initialize Edge Client

from modzy import EdgeClient

# Initialize edge client
# Use 'localhost' for local inferences, otherwise use the device's full IP address
client = EdgeClient('localhost',55000)

Submit Inference with Job API

Modzy Edge supports text, embedded, and aws-s3 input types.

# Submit a text job to a Sentiment Analysis model deployed on an edge device by providing a model ID, version, and raw text data
job = client.jobs.submit_text("ed542963de","1.0.27",{"input.txt": "this is awesome"})
# Block until results are ready
final_job_details = client.jobs.block_until_complete(job)
results = client.jobs.get_results(job)

Query Details about Inference with Job API

# get job details for a particular job
job_details = client.jobs.get_job_details(job)

# get job details for all jobs run on your Modzy Edge instance
all_job_details = client.jobs.get_all_job_details()

Submit Inference with Inference API

The SDK provides several methods for interacting with Modzy's Inference API:

  • Synchronous: This convenience method wraps two SDK methods and is optimal for use cases that require real-time or sequential results (i.e., a prediction result is needed to inform an action before submitting a new inference)
  • Asynchronous: This method combines two SDK methods and is optimal for submitting large batches of data and querying results at a later time (i.e., real-time inference is not required)
  • Streaming: This method is a convenience method for running multiple synchronous inferences consecutively and allows users to submit iterable objects to be processed sequentially in real-time

Synchronous (image-based model example)

from modzy import EdgeClient
from modzy.edge import InputSource

image_bytes = open("image_path.jpg", "rb").read()
input_object = InputSource(
    key="image", # input filename defined by model author
    data=image_bytes,
) 

with EdgeClient('localhost', 55000) as client:
  inference = client.inferences.run("<model-id>", "<model-version>", input_object, explain=False, tags=None)
results = inference.result.outputs

Asynchronous (image-based model example - submit batch of images in folder)

import os
from modzy import EdgeClient
from modzy.edge import InputSource

# submit inferences
img_folder = "./images"
inferences = []
for img in os.listdir(img_folder):
  input_object = InputSource(
    key="image", # input filename defined by model author
    data=open(os.path.join(img_folder, img), 'rb').read()
  )
  with EdgeClient('localhost', 55000) as client:
    inference = client.inferences.perform_inference("<model-id>", "<model-version>", input_object, explain=False, tags=None)
  inferences.append(inference)

# query results 
with EdgeClient('localhost', 55000) as client:
  results = [client.inferences.block_until_complete(inference.identifier) for inference in inferences]

Stream

import os
from modzy import EdgeClient
from modzy.edge import InputSource

# generate requests iterator to pass to stream method
img_folder = "./images"
requests = []
for img in os.listdir(img_folder):
  input_object = InputSource(
    key="image", # input filename defined by model author
    data=open(os.path.join(img_folder, img), 'rb').read()
  )
  with EdgeClient('localhost', 55000) as client:
    requests.append(client.inferences.build_inference_request("<model-id>", "<model-version>", input_object, explain=False, tags=None)) 

# submit list of inference requests to streaming API
with EdgeClient('localhost', 55000) as client:
  streaming_results = client.inferences.stream(requests)

SDK Code Examples

View examples of practical workflows:

Documentation

Modzy's SDK is built on top of the Modzy HTTP/REST API. For a full list of features and supported routes, visit the Python SDK reference on docs.modzy.com.

API Reference

Feature | Code | API route
Deploy new model | client.models.deploy() | api/models
Get all models | client.models.get_all() | api/models
List models | client.models.get_models() | api/models
Get model details | client.models.get() | api/models/:model-id
List models by name | client.models.get_by_name() | api/models
List models by tag | client.tags.get_tags_and_models() | api/models/tags/:tag-id
Get related models | client.models.get_related() | api/models/:model-id/related-models
List a model's versions | client.models.get_versions() | api/models/:model-id/versions
Get a version's details | client.models.get_version() | api/models/:model-id/versions/:version-id
Update processing engines | client.models.update_processing_engines() | api/resource/models
Get minimum engines | client.models.get_minimum_engines() | api/models/processing-engines
List tags | client.tags.get_all() | api/models/tags
Submit a Job (Text) | client.jobs.submit_text() | api/jobs
Submit a Job (Embedded) | client.jobs.submit_embedded() | api/jobs
Submit a Job (File) | client.jobs.submit_file() | api/jobs
Submit a Job (AWS S3) | client.jobs.submit_aws_s3() | api/jobs
Submit a Job (Azure Blob Storage) | client.jobs.submit_azureblob() | api/jobs
Submit a Job (NetApp StorageGRID) | client.jobs.submit_storagegrid() | api/jobs
Submit a Job (JDBC) | client.jobs.submit_jdbc() | api/jobs
Cancel job | job.cancel() | api/jobs/:job-id
Hold until inference is complete | job.block_until_complete() | api/jobs/:job-id
Get job details | client.jobs.get() | api/jobs/:job-id
Get results | job.get_result() | api/results/:job-id
Get the job history | client.jobs.get_history() | api/jobs/history
Submit a Job with Edge Client (Embedded) | EdgeClient.jobs.submit_embedded() | Python/edge/jobs
Submit a Job with Edge Client (Text) | EdgeClient.jobs.submit_text() | Python/edge/jobs
Submit a Job with Edge Client (AWS S3) | EdgeClient.jobs.submit_aws_s3() | Python/edge/jobs
Get job details with Edge Client | EdgeClient.jobs.get_job_details() | Python/edge/jobs
Get all job details with Edge Client | EdgeClient.jobs.get_all_job_details() | Python/edge/jobs
Hold until job is complete with Edge Client | EdgeClient.jobs.block_until_complete() | Python/edge/jobs
Get results with Edge Client | EdgeClient.jobs.get_results() | Python/edge/jobs
Build inference request with Edge Client | EdgeClient.inferences.build_inference_request() | Python/edge/inferences
Perform inference with Edge Client | EdgeClient.inferences.perform_inference() | Python/edge/inferences
Get inference details with Edge Client | EdgeClient.inferences.get_inference_details() | Python/edge/inferences
Run synchronous inferences with Edge Client | EdgeClient.inferences.run() | Python/edge/inferences
Hold until inference completes with Edge Client | EdgeClient.inferences.block_until_complete() | Python/edge/inferences
Stream inferences with Edge Client | EdgeClient.inferences.stream() | Python/edge/inferences

Support

For support, email [email protected] or join our Slack.

Contributing

Contributions are always welcome!

See contributing.md for ways to get started.

Please adhere to this project's code of conduct.


Contributor Covenant

sdk-python's People

Contributors

bmunday3, caradoxical, cbatis, datasciencedeconstructed, dholmancoding, modzyopensource, n8mellis, nathan-mellis, raul-casallas, saumil-d


sdk-python's Issues

Enhancement idea: default to latest model version when running jobs

Checklist

Please first confirm that this issue is related to the SDK by checking the relevant checkboxes ([x]).

  • I have an active Modzy API Key and the entitlements to perform the desired action.
  • I have verified that I have access to the Modzy API host.
  • I believe this is an error specific to the SDK.
  • I have reviewed the documentation and existing issues to avoid duplicates.
  • I am willing to follow up on comments in a timely manner.

Info

  • Modzy SDK version: 0.6.0
  • Python version: 3.9.7
  • Operating System: Windows

Description

When submitting a job to any model, one of the parameters is the model version. There are ways to extract the latest version programmatically, but it would be great if the SDK defaulted to the latest version in the case a version is not supplied.

Steps to reproduce

You can get the latest model version this way in the SDK:

for model in models:
    model_info = client.models.get(models[model]['id'])
    if models[model].get('version') is None:  # unless a version was set explicitly above
        models[model]['version'] = model_info.latestActiveVersion
    models[model]['name'] = model_info.name

Expected results: Default latest version extracted by SDK automatically if user does not pass version through

Actual results: Requires manual work on user currently
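A hedged sketch of a user-side wrapper that provides this default today (the submit_text_latest helper name is hypothetical; the latestActiveVersion lookup mirrors the snippet above):

```python
def submit_text_latest(client, model_id, sources, explain=False):
    # Hypothetical helper: resolve the model's latest active version via
    # client.models.get(), then submit, so callers can omit the version.
    model_info = client.models.get(model_id)
    version = model_info.latestActiveVersion
    return client.jobs.submit_text(model_id, version, sources, explain=explain)
```

The SDK itself could apply the same lookup internally whenever the version argument is omitted.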

Error while importing modzy into google colab

Checklist

Please first confirm that this issue is related to the SDK by checking the relevant checkboxes ([x]).

  • I have an active Modzy API Key and the entitlements to perform the desired action.
  • I have verified that I have access to the Modzy API host.
  • I believe this is an error specific to the SDK.
  • I have reviewed the documentation and existing issues to avoid duplicates.
  • I am willing to follow up on comments in a timely manner.

Info

  • Modzy SDK version: modzy-sdk-0.5.7
  • Python version: 3.7.12
  • Operating System: Windows/Google Colab

Description

Syntax error while importing modzy.

Steps to reproduce

Input: 

from modzy import ApiClient
from modzy._util import file_to_bytes

Output: 

File "/usr/local/lib/python3.7/dist-packages/modzy/_util.py", line 54
while chunk := file.read(chunk_size):
            ^
SyntaxError: invalid syntax

Expected results: To import modzy into my workspace

Actual results: Syntax Error

TLS Certificate/SSL Connection Issue

Checklist

Please first confirm that this issue is related to the SDK by checking the relevant checkboxes ([x]).

  • I have an active Modzy API Key and the entitlements to perform the desired action.
  • I have verified that I have access to the Modzy API host.
  • I believe this is an error specific to the SDK.
  • I have reviewed the documentation and existing issues to avoid duplicates.
  • I am willing to follow up on comments in a timely manner.

Info

  • Modzy SDK version: 0.5.3
  • Python version: 3.8.5
  • Operating System: Windows

Description

I was trying to submit a job to one of our dev environments "intdev.modzy.engineering" but ran into an SSL error (error below).

Steps to reproduce

From a BAH machine, submit a job using the Python SDK using this line:

job = client.jobs.submit_files(model_id, model_version, {'input.txt': './input.txt'})

I believe it is an issue with the certificates installed/trusted by our SDK. It would be great if there were an optional "certs" parameter on the ApiClient constructor where we could specify a root TLS certificate to validate the HTTPS response against. Maybe something like:

client = ApiClient(base_url=URL, api_key=KEY, certs=CERTS)

The change might need to be made here https://github.com/modzy/sdk-python/blob/main/modzy/http.py#L84.
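As a possible workaround until such a parameter exists: the requests library that the SDK uses internally honors the REQUESTS_CA_BUNDLE environment variable, so pointing it at a root CA bundle before constructing the client may help (the .pem path below is a placeholder):

```python
import os

# requests consults REQUESTS_CA_BUNDLE for TLS verification; set it before
# creating the ApiClient. The .pem path here is a placeholder.
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/corporate-root-ca.pem"
```

After setting the variable, initialize ApiClient as usual.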

Expected results: Successful job submission.

Actual results: Error below

Traceback

~\Anaconda3\envs\modzy\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
    669             # Make the request on the httplib connection object.
--> 670             httplib_response = self._make_request(
    671                 conn,

~\Anaconda3\envs\modzy\lib\site-packages\urllib3\connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
    380         try:
--> 381             self._validate_conn(conn)
    382         except (SocketTimeout, BaseSSLError) as e:

~\Anaconda3\envs\modzy\lib\site-packages\urllib3\connectionpool.py in _validate_conn(self, conn)
    977         if not getattr(conn, "sock", None):  # AppEngine might not have  `.sock`
--> 978             conn.connect()
    979 

~\Anaconda3\envs\modzy\lib\site-packages\urllib3\connection.py in connect(self)
    361 
--> 362         self.sock = ssl_wrap_socket(
    363             sock=conn,

~\Anaconda3\envs\modzy\lib\site-packages\urllib3\util\ssl_.py in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir, key_password, ca_cert_data)
    385         if HAS_SNI and server_hostname is not None:
--> 386             return context.wrap_socket(sock, server_hostname=server_hostname)
    387 

~\Anaconda3\envs\modzy\lib\ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
    499         # ctx._wrap_socket()
--> 500         return self.sslsocket_class._create(
    501             sock=sock,

~\Anaconda3\envs\modzy\lib\ssl.py in _create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
   1039                         raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-> 1040                     self.do_handshake()
   1041             except (OSError, ValueError):

~\Anaconda3\envs\modzy\lib\ssl.py in do_handshake(self, block)
   1308                 self.settimeout(None)
-> 1309             self._sslobj.do_handshake()
   1310         finally:

SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)

During handling of the above exception, another exception occurred:

MaxRetryError                             Traceback (most recent call last)
~\Anaconda3\envs\modzy\lib\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
    438             if not chunked:
--> 439                 resp = conn.urlopen(
    440                     method=request.method,

~\Anaconda3\envs\modzy\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
    725 
--> 726             retries = retries.increment(
    727                 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]

~\Anaconda3\envs\modzy\lib\site-packages\urllib3\util\retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
    445         if new_retry.is_exhausted():
--> 446             raise MaxRetryError(_pool, url, error or ResponseError(cause))
    447 

MaxRetryError: HTTPSConnectionPool(host='intdev.modzy.engineering', port=443): Max retries exceeded with url: /api/models?per-page=1000 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)')))

During handling of the above exception, another exception occurred:

SSLError                                  Traceback (most recent call last)
~\Anaconda3\envs\modzy\lib\site-packages\modzy\http.py in request(self, method, url, json_data)
     83         try:
---> 84             response = self.session.request(method, url, data=data, headers=headers)
     85             self.logger.debug("response %s", response.status_code)

~\Anaconda3\envs\modzy\lib\site-packages\requests\sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    529         send_kwargs.update(settings)
--> 530         resp = self.send(prep, **send_kwargs)
    531 

~\Anaconda3\envs\modzy\lib\site-packages\requests\sessions.py in send(self, request, **kwargs)
    642         # Send the request
--> 643         r = adapter.send(request, **kwargs)
    644 

~\Anaconda3\envs\modzy\lib\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
    513                 # This branch is for urllib3 v1.22 and later.
--> 514                 raise SSLError(e, request=request)
    515 

SSLError: HTTPSConnectionPool(host='intdev.modzy.engineering', port=443): Max retries exceeded with url: /api/models?per-page=1000 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)')))

During handling of the above exception, another exception occurred:

NetworkError                              Traceback (most recent call last)
<ipython-input-11-31a2100395aa> in <module>
----> 1 client.models.get_all()

~\Anaconda3\envs\modzy\lib\site-packages\modzy\models.py in get_all(self)
    157         """
    158         self.logger.debug("getting all models")
--> 159         return self.get_models()
    160 
    161     def get_models(self, model_id=None, author=None, created_by_email=None, name=None, description=None,

~\Anaconda3\envs\modzy\lib\site-packages\modzy\models.py in get_models(self, model_id, author, created_by_email, name, description, is_active, is_expired, is_recommended, last_active_date_time, expiration_date_time, sort_by, direction, page, per_page)
    253         body = {k: v for (k, v) in body.items() if v is not None}
    254         self.logger.debug("body 2? %s", body)
--> 255         json_list = self._api_client.http.get('{}?{}'.format(self._base_route, urlencode(body)))
    256         return list(Model(json_obj, self._api_client) for json_obj in json_list)
    257 

~\Anaconda3\envs\modzy\lib\site-packages\modzy\http.py in get(self, url)
    121                 or the client is unable to connect.
    122         """
--> 123         return self.request('GET', url)
    124 
    125     def post(self, url, json_data=None):

~\Anaconda3\envs\modzy\lib\site-packages\modzy\http.py in request(self, method, url, json_data)
     86         except requests.exceptions.RequestException as ex:
     87             self.logger.exception('unable to make network request')
---> 88             raise NetworkError(str(ex), url, reason=ex)
     89 
     90         try:

NetworkError: HTTPSConnectionPool(host='intdev.modzy.engineering', port=443): Max retries exceeded with url: /api/models?per-page=1000 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)')))

Request: Python Method for downloading non-text inference results

Checklist

Please first confirm that this issue is related to the SDK by checking the relevant checkboxes ([x]).

  • [x] I have an active Modzy API Key and the entitlements to perform the desired action.
  • [x] I have verified that I have access to the Modzy API host.
  • [x] I believe this is an error specific to the SDK.
  • [x] I have reviewed the documentation and existing issues to avoid duplicates.
  • [x] I am willing to follow up on comments in a timely manner.

Info

  • Modzy SDK version: 0.5.7
  • Python version: Python 3.8
  • Operating System: macOS Catalina (10.15.7)

Description

When you use a model that produces a file output, Modzy saves that file and returns its location in the results object. To download that file, you need to make an authenticated GET request (e.g., with cURL). It would be nice if Modzy's Python SDK had a method that automatically downloaded that file, or let you save it to a specific location. Maybe something like...

client.results.download(name-of-file, file-save-location)

Just an idea.


Add a feature list

Checklist

Please first confirm that this issue is related to the SDK by checking the relevant checkboxes ([x]).

  • I have an active Modzy API Key and the entitlements to perform the desired action.
  • I have verified that I have access to the Modzy API host.
  • I believe this is an error specific to the SDK.
  • I have reviewed the documentation and existing issues to avoid duplicates.
  • I am willing to follow up on comments in a timely manner.

Info

  • Modzy SDK version:

Current version?

Description

As a user of the SDK, I want to see a list of features or API calls supported by the SDK without having to navigate the code.

Add `timeout` parameter to `edge.inferences.run` method

Checklist

Please first confirm that this issue is related to the SDK by checking the relevant checkboxes ([x]).

  • I have an active Modzy API Key and the entitlements to perform the desired action.
  • I have verified that I have access to the Modzy API host.
  • I believe this is an error specific to the SDK.
  • I have reviewed the documentation and existing issues to avoid duplicates.
  • I am willing to follow up on comments in a timely manner.

Info

  • Modzy SDK version: 0.11.5
  • Python version: 3.8.10
  • Operating System: Linux

Description

The edge client's inferences.run method is a wrapper around inferences.perform_inference and inferences.block_until_complete. The issue is that this method does not accept a timeout parameter, so for models that take longer than 30 seconds to perform inference (the default timeout in the block_until_complete method), you have to call perform_inference and then block_until_complete manually.
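Until a timeout parameter is added, the two-step workflow can be wrapped in a small helper (a sketch; it assumes block_until_complete accepts a timeout keyword, matching the SDK's pattern elsewhere):

```python
def run_with_timeout(client, model_id, version, input_object, timeout=120):
    # Hypothetical wrapper: perform_inference followed by block_until_complete
    # with an explicit timeout, in place of inferences.run.
    inference = client.inferences.perform_inference(model_id, version, input_object)
    return client.inferences.block_until_complete(inference.identifier, timeout=timeout)
```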
