microsoft / onnx-server-openenclave

An Open Enclave port of the ONNX inference server with data encryption and attestation capabilities to enable confidential inference on Azure Confidential Computing.

License: MIT License

Languages: C 78.76%, C++ 15.45%, Python 2.81%, CMake 2.48%, Shell 0.49%

onnx-server-openenclave's Introduction

Confidential ONNX Inference Server

The Confidential Inferencing Beta is a collaboration between Microsoft Research, Azure Confidential Compute, Azure Machine Learning, and Microsoft's ONNX Runtime project. It is provided here as-is, as a beta, to showcase a hosting approach that prevents the machine learning hosting party from accessing both the inference request and its corresponding response.

Architecture overview of confidential inference server

As part of this implementation, the Trusted Execution Environment (TEE) generates a private ECDH key which is secured within the enclave and used to decrypt incoming inference requests. The client (reference code also provided) first obtains the server's public key together with an attestation report proving that the key was created by the TEE. It then completes a key exchange to derive an encryption key for its request.
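Conceptually, the client side of this exchange looks like the following minimal sketch, assuming a P-256 curve and HKDF-SHA256; the actual confonnx wire protocol, curve, and KDF parameters may differ.

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_request_key(server_public_key):
    # Ephemeral client key pair for this session.
    client_key = ec.generate_private_key(ec.SECP256R1())
    # Shared secret with the enclave's public key (which must first be
    # validated against the attestation report).
    shared = client_key.exchange(ec.ECDH(), server_public_key)
    # Derive a symmetric key for encrypting the inference request.
    hkdf = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b'request-encryption')  # hypothetical context label
    # Return our public part (sent to the server) and the derived key.
    return client_key.public_key(), hkdf.derive(shared)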

Currently, the provided AKS deployment example only works on a single node, because scaling to multiple nodes requires provisioning the same private key to all inference enclaves. Developers can use the key provider interface to plug in their own key distribution solution, but no official implementation is supported at this stage. We welcome open source contributions.
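As an illustration only, such a key provider could be modeled as the abstraction below; the repository's actual interface is defined in native code, and the names here are hypothetical.

from abc import ABC, abstractmethod

class KeyProvider(ABC):
    # Hypothetical sketch; the real key provider interface in this
    # repository is native code and may differ.
    @abstractmethod
    def get_private_key(self) -> bytes:
        """Return the service's long-term ECDH private key, e.g. fetched
        from a secure key-release service so that every enclave replica
        behind the load balancer holds the same key."""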

Overview of steps

To make this tutorial easier to follow, we first describe how to build and run the inference server locally on an Azure Confidential Computing VM, and then separately describe the steps to build and deploy a container on Azure Kubernetes Service.

Setting up a local deployment on an ACC VM

The reason for running this deployment flow on an ACC VM is that it lets you test the server locally during deployment. This requires Intel SGX support on the server, which is available on DC-series VMs from Azure Confidential Computing. If you don't need to test the server locally, an ACC VM is not required: skip to the AKS deployment tutorial after you have built the server image and Python client.

Note: Azure subscriptions have a default quota of 8 cores, and the development VM will take some of them. It is recommended to use a DC2s v2 VM (2 cores) as the build machine, leaving the rest for the ACC AKS cluster.

Prepare your ONNX model

Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. Most frameworks (PyTorch, TensorFlow, etc.) support converting models to ONNX.

This repository depends on an Open Enclave port of the generally available ONNX Runtime in external/onnxruntime. Make sure the ONNX model you use is supported by the provided runtime (in general, there should be no issues).

Build the server image

In this step you will build a server image, which you can test locally and eventually deploy to an AKS cluster. You will need to follow these three main steps:

  1. Clone this git repo.
  2. Build a generic container.
  3. Bundle the generic container with your own ONNX model.

To give your clients guarantees that your code is secure, you would need to provide them with the container code. Their client can then match the enclave hash they generate (enclave_info.txt) against your hosted server. You can also provide the model hash, so they know which model binary was used to produce the inferencing results.
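For example, using the confonnx client API shown later in this README, a client might pin both hashes before sending any data; the endpoint and hash values below are placeholders.

from confonnx.client import Client

# Sketch of client-side pinning; the keyword arguments mirror the Python
# API example later in this README, and the values are placeholders.
client = Client(
    'https://myserver.example',                        # hypothetical endpoint
    enclave_hash='<mrenclave from enclave_info.txt>',  # identity of the code
    enclave_model_hash=open('model.hash').read().strip(),  # identity of the model
)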

Test inference with the sample client

To use your server, your clients need a proprietary protocol that verifies the target server is secure before sending it the encrypted inferencing request. This git repository provides an open source Python library called confonnx which can be used to call the server with this protocol.

Note: During the deployment we use the command line, but the command line interface is not ideal for production. Consider using the API directly (more information below).

Note: The client library is also used to generate the hash of the ONNX model and to create inference test data.

AKS Deployment

Once you have the server image you can run it on your VM via Docker and use the client library to verify that everything works properly. Once you have built and tested the confidential inference server container on your VM, you are ready to deploy to an AKS cluster. Remember: without a key management solution you can only deploy on a single node.

Building and testing on an ACC VM

This section describes the steps needed to build and run a confidential inference server on an Azure Confidential Compute VM. Note: The following commands were tested on an ACC VM running Ubuntu 18.04.

Provision an ACC VM

Follow the steps in Deploy an Azure Confidential Computing VM.

Notes:

  • You will need an empty resource group.
  • Image: Ubuntu Server 18.04 (Gen 2)
  • Choose the SSH public key option
  • VM size: 1x Standard DC2s v2
  • Public inbound ports: SSH (Linux) / RDP (Windows)

SSH

You can now SSH into your machine: go to the ACC VM resource in the portal, choose Connect, and select SSH.

ssh -i <private key path> <username>@<ip address>
# <username>@accvm:~$
sudo apt update

Get the code

Clone this repository:

git clone https://github.com/microsoft/onnx-server-openenclave
cd onnx-server-openenclave

Install the Azure DCAP client

To run the inference client, the Azure DCAP Client has to be installed. (Note: this requirement may be removed in a future release.) To install the Azure DCAP Client on Ubuntu 18.04, run:

echo "deb [arch=amd64] https://packages.microsoft.com/ubuntu/18.04/prod bionic main" | sudo tee /etc/apt/sources.list.d/msprod.list
wget -qO - https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo apt update
sudo apt install az-dcap-client

Install Python3

Check version:

python3 --version
# Python 3.7.5

Install:

sudo apt install python3 python3-pip

Install Docker

sudo apt install docker.io

Build the confidential inference Python client

Build the Python package (this will take some time):

PYTHON_VERSION=3.7 docker/client/build.sh

The folder dist/Release/lib/python now contains the .whl file for the requested Python version, for example confonnx-0.1.0-cp37-cp37m-linux_x86_64.whl.

Note: manylinux wheels can be built with TYPE=manylinux; however, those do not support enclave identity validation yet. The non-manylinux wheels built above should work on Ubuntu 18.04 and possibly other versions.

Install the Python wheel

Install the built library:

python3.7 -m pip install dist/Release/lib/python/confonnx-0.1.0-cp37-cp37m-linux_x86_64.whl

Build the generic server

Open enclave.conf and adjust enclave parameters as necessary:

  • Debug: Set to 0 for deployment. If left as 1, an attacker can access the enclave memory.
  • NumTCS: Set to the number of available cores on the deployment VM.
  • NumHeapPages: In-enclave heap memory; increase it if out-of-memory errors occur, for example with large models. A sample configuration is shown below.
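A minimal sketch of adjusted values (illustrative only; keep the repository's other defaults and syntax):

# enclave.conf (excerpt, illustrative values)
Debug=0
NumTCS=2            # match the cores of the DC2s v2 build VM
NumHeapPages=65536  # increase for large models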

By default, an enclave signing key pair is created if it doesn't exist yet. To use your own, copy the private key to enclave.pem in the repository root.
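If you want to generate your own key, the following sketch uses the Python cryptography package; SGX enclave signing conventionally requires an RSA-3072 key with public exponent 3, but verify this against your Open Enclave version.

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

# RSA-3072 with public exponent 3, as conventionally required for SGX signing.
key = rsa.generate_private_key(public_exponent=3, key_size=3072)
with open('enclave.pem', 'wb') as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))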

Run the following to build the server using Docker (this takes a while):

docker/server/build.sh

The server binaries are stored in dist/Release. In the subfolder bin/ you will also find an enclave_info.txt file. This file contains the enclave hash (mrenclave) that clients need in order to validate the enclave's identity before sending inference requests.

Prepare your ONNX model

The inference server uses the ONNX Runtime and hence the model has to be converted into ONNX format first. See the ONNX Tutorials page for an overview of available converters. Make sure the target runtime (see external/onnxruntime) supports the ONNX model version.
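For instance, a PyTorch model can be exported and its declared opset inspected as in this sketch; the stand-in model and opset choice are illustrative, so check external/onnxruntime for the supported versions.

import torch
import onnx

# Export a stand-in PyTorch model to ONNX (any converter works).
model = torch.nn.Linear(4, 2)
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, 'model.onnx', opset_version=11,
                  input_names=['input'], output_names=['output'])

# Inspect the IR version and operator sets the model declares.
m = onnx.load('model.onnx')
print(m.ir_version, [(op.domain, op.version) for op in m.opset_import])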

For testing, you can download pre-trained ONNX models from the ONNX Model Zoo.

This guide will use one of the pre-trained MNIST models from the Zoo.

curl https://media.githubusercontent.com/media/onnx/models/master/vision/classification/mnist/model/mnist-7.onnx --output model.onnx

Compute the Model Hash

To ensure that inference requests are only sent to inference servers that are loaded with a specific model, we can compute the model hash and have the client verify it before sending the inferencing request. Note that this is an optional feature.

python3 -m confonnx.hash_model model.onnx --out model.hash
# 0d715376572e89832685c56a65ef1391f5f0b7dd31d61050c91ff3ecab16c032
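For intuition, the hash is a digest over the model file. A plain SHA-256 over the bytes can be computed as below, though confonnx.hash_model is the canonical tool and its exact scheme may differ.

import hashlib

# Illustrative only: confonnx.hash_model is the canonical tool; a plain
# SHA-256 over the file bytes may not match its exact scheme.
def file_digest(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

print(file_digest('model.onnx'))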

Create the Docker Image

We are now ready to bundle the model and the server into a Docker image ready for deployment:

# Adjust model path and image name if needed.
MODEL_PATH=model.onnx IMAGE_NAME=model-server docker/server/build_image.sh 

Create Inference Test Data

Before testing the server we need some inference test data. We can use the following tool to create random data according to the model schema:

python3 -m confonnx.create_test_inputs --model model.onnx --out input.json

Test the Server Locally

Start the server with:

sudo docker run --rm --name model-server-test --device=/dev/sgx -p 8888:8888 model-server

Now we can send our first inference request:

python3 -m confonnx.main --url http://localhost:8888/ --enclave-hash "<mrenclave>" --enclave-model-hash-file model.hash --json-in input.json --json-out output.json

The inference result is stored in output.json.

Note: Add --enclave-allow-debug if Debug is set to 1 in enclave.conf.

To stop the server, run:

sudo docker stop model-server-test

Deployment to AKS

See the dedicated AKS deployment tutorial.

Using the Python client API

In the above instructions, the command line inference client was used. This client is not meant for production scenarios and offers restricted functionality.

Using the Python API directly has the following advantages:

  • Simple inference input/output format (dictionary of numpy arrays).
  • Efficient handling of multiple requests (avoiding repeated key exchanges).
  • Custom error handling.

Example:

import numpy as np
from confonnx.client import Client

client = Client('https://...', auth_key='password123', enclave_hash='<mrenclave>', enclave_model_hash='...')
result = client.predict({
  'image': np.random.random_sample((5,128,128)) # five 128x128 images
})
print(result['digit'])
# [1,6,2,1,0]
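Because the client keeps the established session, repeated calls avoid the cost of a new key exchange (per the advantages listed above); a sketch of batched use, with illustrative batch contents:

# Reusing one Client amortizes attestation and key exchange across requests.
for _ in range(10):
    batch = {'image': np.random.random_sample((5, 128, 128))}
    print(client.predict(batch)['digit'])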

Frequently asked questions

Can multiple instances of the server be deployed for scalability?

Currently not, but support for it will be added in a future release.

Can models be protected?

The server includes experimental and undocumented options for model protection which advanced users may use at their own risk. No support is provided for these options.

Full support for model protection will come in a future release.

Can the server be tested on a non-SGX machine?

Currently not, though this is planned for a future release.

Can the Python client be built for macOS or Windows?

Currently not, though community contributions are highly welcomed to support this.

Is there a C/C++ version of the client?

The Python client is a thin wrapper around C++ code (see confonnx/client and external/confmsg). This code can be used as a basis for building a custom native client.

onnx-server-openenclave's People

Contributors

ad-l, kapilvgit, letmaik, microsoft-github-policy-service[bot], pengpeng-microsoft, wintersteiger


onnx-server-openenclave's Issues

Failed to open Intel SGX device

Hi,
Is it possible to run the server on a Linux machine with SGX support (not an ACC VM)?

I am trying to run it on a Linux machine:
OS: Ubuntu 20.04
CPU: Intel i5 7200U
SGX SDK: 2.15

I was able to complete the setup, but I am getting this error when trying to start the server:

sudo docker run --rm --name model-server-test --device=/dev/sgx/provision -p 8888:8888 model-server
[2022-01-21 09:47:43.469] [ServerApp] [info] Enclave path: /root/confonnx_server_enclave.signed
[2022-01-21 09:47:43.469] [ServerApp] [info] Model path: /root/model.onnx
[2022-01-21 09:47:43.469] [ServerApp] [info] Creating enclave
[get_driver_type /home/sgx/jenkins/linux-ubuntuServer-release-build-trunk-215.1/build_target/PROD/label/Builder-UbuntuSrv18/label_exp/ubuntu64/linux-trunk-opensource/psw/urts/linux/edmm_utility.cpp:111] Failed to open Intel SGX device.
2022-01-21T09:47:43.000000Z [(H)ERROR] tid(0x7f8a50504f40) | enclave_create with ENCLAVE_TYPE_SGX1 type failed (err=0x1) (oe_result_t=OE_PLATFORM_ERROR) [../host/sgx/sgxload.c:oe_sgx_create_enclave:480]
2022-01-21T09:47:43.000000Z [(H)ERROR] tid(0x7f8a50504f40) | :OE_PLATFORM_ERROR [../host/sgx/create.c:oe_sgx_build_enclave:812]
2022-01-21T09:47:43.000000Z [(H)ERROR] tid(0x7f8a50504f40) | :OE_PLATFORM_ERROR [../host/sgx/create.c:oe_create_enclave:960]
[2022-01-21 09:47:43.494] [ServerApp] [critical] ERROR (N11onnxruntime6server15EnclaveSDKErrorE): OE_PLATFORM_ERROR

Suggestion to change link for "ACC VM"

The "ACC VM" in the image below which is from the page
https://github.com/microsoft/onnx-server-openenclave
links to the Azure confidential computing solution page (https://azure.microsoft.com/en-us/solutions/confidential-compute/), not the page about deploying an ACC VM (e.g., https://docs.microsoft.com/en-us/azure/confidential-computing/quick-create-marketplace)
image

Suggestion is to change to think to point to
https://docs.microsoft.com/en-us/azure/confidential-computing/quick-create-marketplace

got CRL expiration error when launching client

Hi,
I was trying to launch the ONNX client and got the following error.
It's on Ubuntu 18.04, and I've installed SGX SDK 2.11 and DCAP driver 1.8.
Anything else I need to install?
Thanks,
Leo

leo@leo-Inspiron:~/onnx-server-openenclave$ python3 -m confonnx.main --url http://localhost:8888/ --enclave-hash bc07410d251920537a587ce41d11cd964efced5c0812defc90c7e1696723efd3 --enclave-model-hash-file model.hash --json-in input.json --json-out output.json --enclave-allow-debug
STEP 1: Establishing encrypted & attested connection with enclave
Sending request (0.1 KiB)
Received response after 31.9 ms (4.8 KiB)
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | X509_verify_cert failed!
error: (12) CRL has expired
(oe_result_t=OE_VERIFY_CRL_EXPIRED) [../host/crypto/openssl/cert.c:_verify_cert:361]
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | :OE_VERIFY_CRL_EXPIRED [../host/crypto/openssl/cert.c:oe_cert_verify:730]
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | Failed to verify leaf certificate. OE_VERIFY_CRL_EXPIRED (oe_result_t=OE_VERIFY_CRL_EXPIRED) [../common/sgx/collateral.c:oe_validate_revocation_list:323]
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | :OE_INVALID_PARAMETER [../host/crypto/openssl/cert.c:oe_cert_free:574]
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | Failed to validate revocation info. OE_VERIFY_CRL_EXPIRED (oe_result_t=OE_VERIFY_CRL_EXPIRED) [../common/sgx/quote.c:oe_get_sgx_quote_validity:666]
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | Failed to validate quote. OE_VERIFY_CRL_EXPIRED (oe_result_t=OE_VERIFY_CRL_EXPIRED) [../common/sgx/quote.c:oe_verify_quote_with_sgx_endorsements:506]
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | :OE_VERIFY_CRL_EXPIRED [../common/sgx/quote.c:oe_verify_sgx_quote:471]
2020-10-19T05:34:47.000000Z [(H)ERROR] tid(0x7f28601b1740) | :OE_VERIFY_CRL_EXPIRED [../host/sgx/hostverify_report.c:oe_verify_remote_report:46]
ERROR: Enclave quote invalid

Create Inference Data

While running this command:

python3 -m confonnx.create_test_inputs --model model.onnx --out input.json

I get the error:

File "/home/ai/ben/final/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session
  sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from model.onnx failed:Protobuf parsing failed.

Error when running client

Hi,

I tried to recreate the steps from the readme and got an error when sending the inference request.

I'm running on Ubuntu 18.04.5. Other than following the steps from the readme I had to install libprotoc-dev and python3.7-dev.

This is what happens when I get the error:

user@host$ python3.7 -m confonnx.main --url http://localhost:8888/ --enclave-hash "<HASH>" --enclave-model-hash-file model.hash --json-in input.json --json-out output.json --enclave-allow-debug
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/user/.local/lib/python3.7/site-packages/confonnx/main.py", line 17, in <module>
    from confonnx.client import Client
  File "/home/user/.local/lib/python3.7/site-packages/confonnx/client.py", line 12, in <module>
    import confonnx.predict_pb2 as predict_pb2
  File "/home/user/.local/lib/python3.7/site-packages/confonnx/predict_pb2.py", line 17, in <module>
    import confonnx.onnx_ml_pb2 as onnx__ml__pb2
  File "/home/user/.local/lib/python3.7/site-packages/confonnx/onnx_ml_pb2.py", line 23, in <module>
    serialized_pb=_b('\n\ronnx-ml.proto\x12\x04onnx\"\xe0\x03\n\x0e\x41ttributeProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x15\n\rref_attr_name\x18\x15 \x01(\t\x12\x12\n\ndoc_string\x18\r \x01(\t\x12\x30\n\x04type\x18\x14 \x01(\x0e\x32\".onnx.AttributeProto.AttributeType\x12\t\n\x01\x66\x18\x02 \x01(\x02\x12\t\n\x01i\x18\x03 \x01(\x03\x12\t\n\x01s\x18\x04 \x01(\x0c\x12\x1c\n\x01t\x18\x05 \x01(\x0b\x32\x11.onnx.TensorProto\x12\x1b\n\x01g\x18\x06 \x01(\x0b\x32\x10.onnx.GraphProto\x12\x0e\n\x06\x66loats\x18\x07 \x03(\x02\x12\x0c\n\x04ints\x18\x08 \x03(\x03\x12\x0f\n\x07strings\x18\t \x03(\x0c\x12\"\n\x07tensors\x18\n \x03(\x0b\x32\x11.onnx.TensorProto\x12 \n\x06graphs\x18\x0b \x03(\x0b\x32\x10.onnx.GraphProto\"\x91\x01\n\rAttributeType\x12\r\n\tUNDEFINED\x10\x00\x12\t\n\x05\x46LOAT\x10\x01\x12\x07\n\x03INT\x10\x02\x12\n\n\x06STRING\x10\x03\x12\n\n\x06TENSOR\x10\x04\x12\t\n\x05GRAPH\x10\x05\x12\n\n\x06\x46LOATS\x10\x06\x12\x08\n\x04INTS\x10\x07\x12\x0b\n\x07STRINGS\x10\x08\x12\x0b\n\x07TENSORS\x10\t\x12\n\n\x06GRAPHS\x10\n\"Q\n\x0eValueInfoProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x1d\n\x04type\x18\x02 \x01(\x0b\x32\x0f.onnx.TypeProto\x12\x12\n\ndoc_string\x18\x03 \x01(\t\"\x96\x01\n\tNodeProto\x12\r\n\x05input\x18\x01 \x03(\t\x12\x0e\n\x06output\x18\x02 \x03(\t\x12\x0c\n\x04name\x18\x03 \x01(\t\x12\x0f\n\x07op_type\x18\x04 \x01(\t\x12\x0e\n\x06\x64omain\x18\x07 \x01(\t\x12\'\n\tattribute\x18\x05 \x03(\x0b\x32\x14.onnx.AttributeProto\x12\x12\n\ndoc_string\x18\x06 \x01(\t\"\xbb\x02\n\nModelProto\x12\x12\n\nir_version\x18\x01 \x01(\x03\x12.\n\x0copset_import\x18\x08 \x03(\x0b\x32\x18.onnx.OperatorSetIdProto\x12\x15\n\rproducer_name\x18\x02 \x01(\t\x12\x18\n\x10producer_version\x18\x03 \x01(\t\x12\x0e\n\x06\x64omain\x18\x04 \x01(\t\x12\x15\n\rmodel_version\x18\x05 \x01(\x03\x12\x12\n\ndoc_string\x18\x06 \x01(\t\x12\x1f\n\x05graph\x18\x07 \x01(\x0b\x32\x10.onnx.GraphProto\x12&\n\tfunctions\x18\x64 \x03(\x0b\x32\x13.onnx.FunctionProto\x12\x34\n\x0emetadata_props\x18\x0e \x03(\x0b\x32\x1c.onnx.StringStringEntryProto\"4\n\x16StringStringEntryProto\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t\"k\n\x10TensorAnnotation\x12\x13\n\x0btensor_name\x18\x01 \x01(\t\x12\x42\n\x1cquant_parameter_tensor_names\x18\x02 \x03(\x0b\x32\x1c.onnx.StringStringEntryProto\"\xa3\x02\n\nGraphProto\x12\x1d\n\x04node\x18\x01 \x03(\x0b\x32\x0f.onnx.NodeProto\x12\x0c\n\x04name\x18\x02 \x01(\t\x12&\n\x0binitializer\x18\x05 \x03(\x0b\x32\x11.onnx.TensorProto\x12\x12\n\ndoc_string\x18\n \x01(\t\x12#\n\x05input\x18\x0b \x03(\x0b\x32\x14.onnx.ValueInfoProto\x12$\n\x06output\x18\x0c \x03(\x0b\x32\x14.onnx.ValueInfoProto\x12(\n\nvalue_info\x18\r \x03(\x0b\x32\x14.onnx.ValueInfoProto\x12\x37\n\x17quantization_annotation\x18\x0e \x03(\x0b\x32\x16.onnx.TensorAnnotation\"\xb8\x05\n\x0bTensorProto\x12\x0c\n\x04\x64ims\x18\x01 \x03(\x03\x12\x11\n\tdata_type\x18\x02 \x01(\x05\x12*\n\x07segment\x18\x03 \x01(\x0b\x32\x19.onnx.TensorProto.Segment\x12\x16\n\nfloat_data\x18\x04 \x03(\x02\x42\x02\x10\x01\x12\x16\n\nint32_data\x18\x05 \x03(\x05\x42\x02\x10\x01\x12\x13\n\x0bstring_data\x18\x06 \x03(\x0c\x12\x16\n\nint64_data\x18\x07 \x03(\x03\x42\x02\x10\x01\x12\x0c\n\x04name\x18\x08 \x01(\t\x12\x12\n\ndoc_string\x18\x0c \x01(\t\x12\x10\n\x08raw_data\x18\t \x01(\x0c\x12\x33\n\rexternal_data\x18\r \x03(\x0b\x32\x1c.onnx.StringStringEntryProto\x12\x35\n\rdata_location\x18\x0e \x01(\x0e\x32\x1e.onnx.TensorProto.DataLocation\x12\x17\n\x0b\x64ouble_data\x18\n \x03(\x01\x42\x02\x10\x01\x12\x17\n\x0buint64_data\x18\x0b 
\x03(\x04\x42\x02\x10\x01\x1a%\n\x07Segment\x12\r\n\x05\x62\x65gin\x18\x01 \x01(\x03\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x03\"\xda\x01\n\x08\x44\x61taType\x12\r\n\tUNDEFINED\x10\x00\x12\t\n\x05\x46LOAT\x10\x01\x12\t\n\x05UINT8\x10\x02\x12\x08\n\x04INT8\x10\x03\x12\n\n\x06UINT16\x10\x04\x12\t\n\x05INT16\x10\x05\x12\t\n\x05INT32\x10\x06\x12\t\n\x05INT64\x10\x07\x12\n\n\x06STRING\x10\x08\x12\x08\n\x04\x42OOL\x10\t\x12\x0b\n\x07\x46LOAT16\x10\n\x12\n\n\x06\x44OUBLE\x10\x0b\x12\n\n\x06UINT32\x10\x0c\x12\n\n\x06UINT64\x10\r\x12\r\n\tCOMPLEX64\x10\x0e\x12\x0e\n\nCOMPLEX128\x10\x0f\x12\x0c\n\x08\x42\x46LOAT16\x10\x10\")\n\x0c\x44\x61taLocation\x12\x0b\n\x07\x44\x45\x46\x41ULT\x10\x00\x12\x0c\n\x08\x45XTERNAL\x10\x01\"\x95\x01\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x01 \x03(\x0b\x32 .onnx.TensorShapeProto.Dimension\x1aR\n\tDimension\x12\x13\n\tdim_value\x18\x01 \x01(\x03H\x00\x12\x13\n\tdim_param\x18\x02 \x01(\tH\x00\x12\x12\n\ndenotation\x18\x03 \x01(\tB\x07\n\x05value\"\xc2\x04\n\tTypeProto\x12-\n\x0btensor_type\x18\x01 \x01(\x0b\x32\x16.onnx.TypeProto.TensorH\x00\x12\x31\n\rsequence_type\x18\x04 \x01(\x0b\x32\x18.onnx.TypeProto.SequenceH\x00\x12\'\n\x08map_type\x18\x05 \x01(\x0b\x32\x13.onnx.TypeProto.MapH\x00\x12-\n\x0bopaque_type\x18\x07 \x01(\x0b\x32\x16.onnx.TypeProto.OpaqueH\x00\x12:\n\x12sparse_tensor_type\x18\x08 \x01(\x0b\x32\x1c.onnx.TypeProto.SparseTensorH\x00\x12\x12\n\ndenotation\x18\x06 \x01(\t\x1a\x42\n\x06Tensor\x12\x11\n\telem_type\x18\x01 \x01(\x05\x12%\n\x05shape\x18\x02 \x01(\x0b\x32\x16.onnx.TensorShapeProto\x1a.\n\x08Sequence\x12\"\n\telem_type\x18\x01 \x01(\x0b\x32\x0f.onnx.TypeProto\x1a<\n\x03Map\x12\x10\n\x08key_type\x18\x01 \x01(\x05\x12#\n\nvalue_type\x18\x02 \x01(\x0b\x32\x0f.onnx.TypeProto\x1a&\n\x06Opaque\x12\x0e\n\x06\x64omain\x18\x01 \x01(\t\x12\x0c\n\x04name\x18\x02 \x01(\t\x1aH\n\x0cSparseTensor\x12\x11\n\telem_type\x18\x01 \x01(\x05\x12%\n\x05shape\x18\x02 \x01(\x0b\x32\x16.onnx.TensorShapeProtoB\x07\n\x05value\"5\n\x12OperatorSetIdProto\x12\x0e\n\x06\x64omain\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\x03\"\xbf\x01\n\rFunctionProto\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x15\n\rsince_version\x18\x02 \x01(\x03\x12$\n\x06status\x18\x03 \x01(\x0e\x32\x14.onnx.OperatorStatus\x12\r\n\x05input\x18\x04 \x03(\t\x12\x0e\n\x06output\x18\x05 \x03(\t\x12\x11\n\tattribute\x18\x06 \x03(\t\x12\x1d\n\x04node\x18\x07 \x03(\x0b\x32\x0f.onnx.NodeProto\x12\x12\n\ndoc_string\x18\x08 \x01(\t*\x97\x01\n\x07Version\x12\x12\n\x0e_START_VERSION\x10\x00\x12\x19\n\x15IR_VERSION_2017_10_10\x10\x01\x12\x19\n\x15IR_VERSION_2017_10_30\x10\x02\x12\x18\n\x14IR_VERSION_2017_11_3\x10\x03\x12\x18\n\x14IR_VERSION_2019_1_22\x10\x04\x12\x0e\n\nIR_VERSION\x10\x05*.\n\x0eOperatorStatus\x12\x10\n\x0c\x45XPERIMENTAL\x10\x00\x12\n\n\x06STABLE\x10\x01\x62\x06proto3')
  File "/home/user/.local/lib/python3.7/site-packages/google/protobuf/descriptor.py", line 965, in __new__
    return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: Couldn't build proto file into descriptor pool!
Invalid proto descriptor for file "onnx-ml.proto":
  onnx.AttributeProto.name: "onnx.AttributeProto.name" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.ref_attr_name: "onnx.AttributeProto.ref_attr_name" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.doc_string: "onnx.AttributeProto.doc_string" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.type: "onnx.AttributeProto.type" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.f: "onnx.AttributeProto.f" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.i: "onnx.AttributeProto.i" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.s: "onnx.AttributeProto.s" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.t: "onnx.AttributeProto.t" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.g: "onnx.AttributeProto.g" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.floats: "onnx.AttributeProto.floats" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.ints: "onnx.AttributeProto.ints" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.strings: "onnx.AttributeProto.strings" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.tensors: "onnx.AttributeProto.tensors" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.graphs: "onnx.AttributeProto.graphs" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.UNDEFINED: "onnx.AttributeProto.UNDEFINED" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.UNDEFINED: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "UNDEFINED" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.FLOAT: "onnx.AttributeProto.FLOAT" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.FLOAT: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "FLOAT" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.INT: "onnx.AttributeProto.INT" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.INT: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "INT" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.STRING: "onnx.AttributeProto.STRING" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.STRING: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "STRING" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.TENSOR: "onnx.AttributeProto.TENSOR" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.TENSOR: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "TENSOR" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.GRAPH: "onnx.AttributeProto.GRAPH" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.GRAPH: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "GRAPH" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.FLOATS: "onnx.AttributeProto.FLOATS" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.FLOATS: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "FLOATS" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.INTS: "onnx.AttributeProto.INTS" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.INTS: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "INTS" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.STRINGS: "onnx.AttributeProto.STRINGS" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.STRINGS: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "STRINGS" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.TENSORS: "onnx.AttributeProto.TENSORS" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.TENSORS: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "TENSORS" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.GRAPHS: "onnx.AttributeProto.GRAPHS" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto.GRAPHS: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "GRAPHS" must be unique within "onnx.AttributeProto", not just within "AttributeType".
  onnx.AttributeProto.AttributeType: "onnx.AttributeProto.AttributeType" is already defined in file "onnx/onnx-ml.proto".
  onnx.AttributeProto: "onnx.AttributeProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.ValueInfoProto.name: "onnx.ValueInfoProto.name" is already defined in file "onnx/onnx-ml.proto".
  onnx.ValueInfoProto.type: "onnx.ValueInfoProto.type" is already defined in file "onnx/onnx-ml.proto".
  onnx.ValueInfoProto.doc_string: "onnx.ValueInfoProto.doc_string" is already defined in file "onnx/onnx-ml.proto".
  onnx.ValueInfoProto: "onnx.ValueInfoProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto.input: "onnx.NodeProto.input" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto.output: "onnx.NodeProto.output" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto.name: "onnx.NodeProto.name" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto.op_type: "onnx.NodeProto.op_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto.domain: "onnx.NodeProto.domain" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto.attribute: "onnx.NodeProto.attribute" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto.doc_string: "onnx.NodeProto.doc_string" is already defined in file "onnx/onnx-ml.proto".
  onnx.NodeProto: "onnx.NodeProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.ir_version: "onnx.ModelProto.ir_version" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.opset_import: "onnx.ModelProto.opset_import" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.producer_name: "onnx.ModelProto.producer_name" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.producer_version: "onnx.ModelProto.producer_version" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.domain: "onnx.ModelProto.domain" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.model_version: "onnx.ModelProto.model_version" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.doc_string: "onnx.ModelProto.doc_string" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.graph: "onnx.ModelProto.graph" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto.metadata_props: "onnx.ModelProto.metadata_props" is already defined in file "onnx/onnx-ml.proto".
  onnx.ModelProto: "onnx.ModelProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.StringStringEntryProto.key: "onnx.StringStringEntryProto.key" is already defined in file "onnx/onnx-ml.proto".
  onnx.StringStringEntryProto.value: "onnx.StringStringEntryProto.value" is already defined in file "onnx/onnx-ml.proto".
  onnx.StringStringEntryProto: "onnx.StringStringEntryProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorAnnotation.tensor_name: "onnx.TensorAnnotation.tensor_name" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorAnnotation.quant_parameter_tensor_names: "onnx.TensorAnnotation.quant_parameter_tensor_names" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorAnnotation: "onnx.TensorAnnotation" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.node: "onnx.GraphProto.node" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.name: "onnx.GraphProto.name" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.initializer: "onnx.GraphProto.initializer" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.doc_string: "onnx.GraphProto.doc_string" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.input: "onnx.GraphProto.input" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.output: "onnx.GraphProto.output" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.value_info: "onnx.GraphProto.value_info" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto.quantization_annotation: "onnx.GraphProto.quantization_annotation" is already defined in file "onnx/onnx-ml.proto".
  onnx.GraphProto: "onnx.GraphProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.dims: "onnx.TensorProto.dims" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.data_type: "onnx.TensorProto.data_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.segment: "onnx.TensorProto.segment" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.float_data: "onnx.TensorProto.float_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.int32_data: "onnx.TensorProto.int32_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.string_data: "onnx.TensorProto.string_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.int64_data: "onnx.TensorProto.int64_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.name: "onnx.TensorProto.name" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.doc_string: "onnx.TensorProto.doc_string" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.raw_data: "onnx.TensorProto.raw_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.external_data: "onnx.TensorProto.external_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.data_location: "onnx.TensorProto.data_location" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.double_data: "onnx.TensorProto.double_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.uint64_data: "onnx.TensorProto.uint64_data" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.Segment.begin: "onnx.TensorProto.Segment.begin" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.Segment.end: "onnx.TensorProto.Segment.end" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.Segment: "onnx.TensorProto.Segment" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.UNDEFINED: "onnx.TensorProto.UNDEFINED" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.UNDEFINED: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "UNDEFINED" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.FLOAT: "onnx.TensorProto.FLOAT" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.FLOAT: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "FLOAT" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.UINT8: "onnx.TensorProto.UINT8" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.UINT8: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "UINT8" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.INT8: "onnx.TensorProto.INT8" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.INT8: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "INT8" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.UINT16: "onnx.TensorProto.UINT16" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.UINT16: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "UINT16" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.INT16: "onnx.TensorProto.INT16" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.INT16: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "INT16" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.INT32: "onnx.TensorProto.INT32" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.INT32: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "INT32" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.INT64: "onnx.TensorProto.INT64" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.INT64: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "INT64" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.STRING: "onnx.TensorProto.STRING" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.STRING: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "STRING" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.BOOL: "onnx.TensorProto.BOOL" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.BOOL: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "BOOL" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.FLOAT16: "onnx.TensorProto.FLOAT16" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.FLOAT16: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "FLOAT16" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.DOUBLE: "onnx.TensorProto.DOUBLE" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.DOUBLE: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "DOUBLE" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.UINT32: "onnx.TensorProto.UINT32" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.UINT32: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "UINT32" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.UINT64: "onnx.TensorProto.UINT64" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.UINT64: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "UINT64" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.COMPLEX64: "onnx.TensorProto.COMPLEX64" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.COMPLEX64: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "COMPLEX64" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.COMPLEX128: "onnx.TensorProto.COMPLEX128" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.COMPLEX128: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "COMPLEX128" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.BFLOAT16: "onnx.TensorProto.BFLOAT16" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.BFLOAT16: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "BFLOAT16" must be unique within "onnx.TensorProto", not just within "DataType".
  onnx.TensorProto.DataType: "onnx.TensorProto.DataType" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.DEFAULT: "onnx.TensorProto.DEFAULT" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.DEFAULT: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "DEFAULT" must be unique within "onnx.TensorProto", not just within "DataLocation".
  onnx.TensorProto.EXTERNAL: "onnx.TensorProto.EXTERNAL" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto.EXTERNAL: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "EXTERNAL" must be unique within "onnx.TensorProto", not just within "DataLocation".
  onnx.TensorProto.DataLocation: "onnx.TensorProto.DataLocation" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorProto: "onnx.TensorProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorShapeProto.dim: "onnx.TensorShapeProto.dim" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorShapeProto.Dimension.value: "onnx.TensorShapeProto.Dimension.value" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorShapeProto.Dimension.dim_value: "onnx.TensorShapeProto.Dimension.dim_value" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorShapeProto.Dimension.dim_param: "onnx.TensorShapeProto.Dimension.dim_param" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorShapeProto.Dimension.denotation: "onnx.TensorShapeProto.Dimension.denotation" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorShapeProto.Dimension: "onnx.TensorShapeProto.Dimension" is already defined in file "onnx/onnx-ml.proto".
  onnx.TensorShapeProto: "onnx.TensorShapeProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.value: "onnx.TypeProto.value" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.tensor_type: "onnx.TypeProto.tensor_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.sequence_type: "onnx.TypeProto.sequence_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.map_type: "onnx.TypeProto.map_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.opaque_type: "onnx.TypeProto.opaque_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.sparse_tensor_type: "onnx.TypeProto.sparse_tensor_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.denotation: "onnx.TypeProto.denotation" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Tensor.elem_type: "onnx.TypeProto.Tensor.elem_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Tensor.shape: "onnx.TypeProto.Tensor.shape" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Tensor: "onnx.TypeProto.Tensor" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Sequence.elem_type: "onnx.TypeProto.Sequence.elem_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Sequence: "onnx.TypeProto.Sequence" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Map.key_type: "onnx.TypeProto.Map.key_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Map.value_type: "onnx.TypeProto.Map.value_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Map: "onnx.TypeProto.Map" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Opaque.domain: "onnx.TypeProto.Opaque.domain" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Opaque.name: "onnx.TypeProto.Opaque.name" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.Opaque: "onnx.TypeProto.Opaque" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.SparseTensor.elem_type: "onnx.TypeProto.SparseTensor.elem_type" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.SparseTensor.shape: "onnx.TypeProto.SparseTensor.shape" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto.SparseTensor: "onnx.TypeProto.SparseTensor" is already defined in file "onnx/onnx-ml.proto".
  onnx.TypeProto: "onnx.TypeProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.OperatorSetIdProto.domain: "onnx.OperatorSetIdProto.domain" is already defined in file "onnx/onnx-ml.proto".
  onnx.OperatorSetIdProto.version: "onnx.OperatorSetIdProto.version" is already defined in file "onnx/onnx-ml.proto".
  onnx.OperatorSetIdProto: "onnx.OperatorSetIdProto" is already defined in file "onnx/onnx-ml.proto".
  onnx.FunctionProto.name: "onnx.FunctionProto.name" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto.since_version: "onnx.FunctionProto.since_version" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto.status: "onnx.FunctionProto.status" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto.input: "onnx.FunctionProto.input" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto.output: "onnx.FunctionProto.output" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto.attribute: "onnx.FunctionProto.attribute" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto.node: "onnx.FunctionProto.node" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto.doc_string: "onnx.FunctionProto.doc_string" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.FunctionProto: "onnx.FunctionProto" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx._START_VERSION: "onnx._START_VERSION" is already defined in file "onnx/onnx-ml.proto".
  onnx._START_VERSION: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "_START_VERSION" must be unique within "onnx", not just within "Version".
  onnx.IR_VERSION_2017_10_10: "onnx.IR_VERSION_2017_10_10" is already defined in file "onnx/onnx-ml.proto".
  onnx.IR_VERSION_2017_10_10: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "IR_VERSION_2017_10_10" must be unique within "onnx", not just within "Version".
  onnx.IR_VERSION_2017_10_30: "onnx.IR_VERSION_2017_10_30" is already defined in file "onnx/onnx-ml.proto".
  onnx.IR_VERSION_2017_10_30: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "IR_VERSION_2017_10_30" must be unique within "onnx", not just within "Version".
  onnx.IR_VERSION_2017_11_3: "onnx.IR_VERSION_2017_11_3" is already defined in file "onnx/onnx-ml.proto".
  onnx.IR_VERSION_2017_11_3: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "IR_VERSION_2017_11_3" must be unique within "onnx", not just within "Version".
  onnx.IR_VERSION_2019_1_22: "onnx.IR_VERSION_2019_1_22" is already defined in file "onnx/onnx-ml.proto".
  onnx.IR_VERSION_2019_1_22: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "IR_VERSION_2019_1_22" must be unique within "onnx", not just within "Version".
  onnx.IR_VERSION: "onnx.IR_VERSION" is already defined in file "onnx/onnx-ml.proto".
  onnx.IR_VERSION: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "IR_VERSION" must be unique within "onnx", not just within "Version".
  onnx.Version: "onnx.Version" is already defined in file "onnx/onnx-ml.proto".
  onnx.EXPERIMENTAL: "onnx.EXPERIMENTAL" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.EXPERIMENTAL: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "EXPERIMENTAL" must be unique within "onnx", not just within "OperatorStatus".
  onnx.STABLE: "onnx.STABLE" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.STABLE: Note that enum values use C++ scoping rules, meaning that enum values are siblings of their type, not children of it.  Therefore, "STABLE" must be unique within "onnx", not just within "OperatorStatus".
  onnx.OperatorStatus: "onnx.OperatorStatus" is already defined in file "onnx/onnx-operators-ml.proto".
  onnx.AttributeProto.type: "onnx.AttributeProto.AttributeType" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.AttributeProto.t: "onnx.TensorProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.AttributeProto.g: "onnx.GraphProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.AttributeProto.tensors: "onnx.TensorProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.AttributeProto.graphs: "onnx.GraphProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.ValueInfoProto.type: "onnx.TypeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.NodeProto.attribute: "onnx.AttributeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.ModelProto.opset_import: "onnx.OperatorSetIdProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.ModelProto.graph: "onnx.GraphProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.ModelProto.functions: "onnx.FunctionProto" seems to be defined in "onnx/onnx-operators-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.ModelProto.metadata_props: "onnx.StringStringEntryProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TensorAnnotation.quant_parameter_tensor_names: "onnx.StringStringEntryProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.GraphProto.node: "onnx.NodeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.GraphProto.initializer: "onnx.TensorProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.GraphProto.input: "onnx.ValueInfoProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.GraphProto.output: "onnx.ValueInfoProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.GraphProto.value_info: "onnx.ValueInfoProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.GraphProto.quantization_annotation: "onnx.TensorAnnotation" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TensorProto.segment: "onnx.TensorProto.Segment" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TensorProto.external_data: "onnx.StringStringEntryProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TensorProto.data_location: "onnx.TensorProto.DataLocation" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TensorShapeProto.dim: "onnx.TensorShapeProto.Dimension" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.Tensor.shape: "onnx.TensorShapeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.Sequence.elem_type: "onnx.TypeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.Map.value_type: "onnx.TypeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.SparseTensor.shape: "onnx.TensorShapeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.tensor_type: "onnx.TypeProto.Tensor" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.sequence_type: "onnx.TypeProto.Sequence" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.map_type: "onnx.TypeProto.Map" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.opaque_type: "onnx.TypeProto.Opaque" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.TypeProto.sparse_tensor_type: "onnx.TypeProto.SparseTensor" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.FunctionProto.status: "onnx.OperatorStatus" seems to be defined in "onnx/onnx-operators-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.
  onnx.FunctionProto.node: "onnx.NodeProto" seems to be defined in "onnx/onnx-ml.proto", which is not imported by "onnx-ml.proto".  To use it here, please add the necessary import.

Any help with getting this to work would be appreciated.

Model size constraints

Hi,
Are there any constraints on the size of the model that can be used for inferencing inside the enclave?

CMake issue when building the confidential inference Python client

When building the Python package (PYTHON_VERSION=3.7 docker/client/build.sh) I encountered a CMake failure, because the download link for Boost no longer works. Upon looking into it, the line

URL http://dl.bintray.com/boostorg/release/${BOOST_REQUESTED_VERSION}/source/boost_${BOOST_REQUESTED_VERSION_UNDERSCORE}.tar.bz2

should be replaced with

URL https://boostorg.jfrog.io/artifactory/main/release/${BOOST_REQUESTED_VERSION}/source/boost_${BOOST_REQUESTED_VERSION_UNDERSCORE}.tar.bz2

in cmake/get_boost.cmake.
