Comments (3)
Actually, we have a Python gRPC server in this project: https://github.com/tobegit3hub/tensorflow_template_application .
However, I do not think a Python gRPC server performs better than the C++ one, which is how TensorFlow Serving works, so we may not add a Python gRPC server to this project.
But you can implement a simple one the way that project does.
from simple_tensorflow_serving.
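To make the suggestion above concrete, here is a minimal sketch of the servicer logic such a Python gRPC prediction server would contain. The message and class names (`PredictRequest`, `PredictResponse`, `PredictionServicer`) are hypothetical stand-ins for what `grpcio-tools` would generate from a real `.proto` file; only the dispatch pattern is the point.

```python
# Sketch of a gRPC-style PredictionService in plain Python. In a real server
# the request/response classes would be protobuf messages generated from a
# hypothetical predict.proto, and the servicer would be registered with a
# grpc.server() instance.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class PredictRequest:   # stand-in for a generated protobuf message
    model_name: str
    inputs: Dict[str, List[float]] = field(default_factory=dict)

@dataclass
class PredictResponse:  # stand-in for a generated protobuf message
    outputs: Dict[str, List[float]] = field(default_factory=dict)

class PredictionServicer:
    """Dispatches each request to an in-process model callable by name."""

    def __init__(self, models: Dict[str, Callable]):
        self._models = models  # {model_name: callable(inputs) -> outputs}

    def Predict(self, request: PredictRequest) -> PredictResponse:
        model_fn = self._models[request.model_name]
        return PredictResponse(outputs=model_fn(request.inputs))

# Example: a trivial "model" that doubles its input tensor.
servicer = PredictionServicer(
    {"double": lambda ins: {"y": [2 * v for v in ins["x"]]}}
)
resp = servicer.Predict(PredictRequest(model_name="double", inputs={"x": [1.0, 2.0]}))
```

The same servicer object could be wired behind either gRPC or the HTTP/JSON endpoint that simple_tensorflow_serving already exposes, since the dispatch logic does not depend on the transport.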
Thanks for your reply. I have implemented a gRPC version using simple_tensorflow_serving, mainly because I want to use custom ops, which TensorFlow Serving does not support very well.
After switching to the custom op, inference is 1.5 times faster than TensorFlow Serving. The op is FasterTransformer, which is implemented by NVIDIA. I tried many ways to use it as a custom op in TensorFlow Serving but failed. If I could add this op to TensorFlow Serving, inference should be even faster. Have a good weekend.
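One reason a Python server makes the custom-op route easier: TensorFlow's Python API can register a compiled op library at runtime with `tf.load_op_library`, whereas the stock TensorFlow Serving C++ binary must be rebuilt with the op linked in. Below is a hedged sketch of that loading step; the library path `libfastertransformer.so` is hypothetical, and the loader is injected as a parameter so the helper runs without TensorFlow installed.

```python
import os

def load_custom_op_libraries(lib_paths, loader, exists=os.path.exists):
    """Load every custom-op shared library that exists on disk.

    `loader` is the function that registers the ops with the runtime;
    with TensorFlow installed this would be tf.load_op_library. It is
    injected here so the helper stays framework-agnostic and testable.
    Returns the list of paths that were actually loaded.
    """
    loaded = []
    for path in lib_paths:
        if exists(path):
            loader(path)  # registers the op kernels with the runtime
            loaded.append(path)
    return loaded

# Hypothetical usage with a stand-in loader. With TensorFlow available you
# would pass tf.load_op_library here and call this before restoring the
# SavedModel, so the graph's custom ops resolve.
registered = load_custom_op_libraries(
    ["./libfastertransformer.so"],  # hypothetical library path
    loader=lambda p: None,          # stand-in for tf.load_op_library
    exists=lambda p: True,          # pretend the library is present
)
```

Calling this before the SavedModel is loaded is the important part: the op kernels must be registered before TensorFlow tries to deserialize a graph that references them.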
Related Issues (20)
- how can I shutdown the model which I don't want to use? HOT 2
- ImportError: No module named simple_tensorflow_serving.server HOT 1
- issue on serving mxnet model
- TypeError: a bytes-like object is required, not 'str' HOT 2
- docker python 3.7 HOT 2
- Postprocessing function in tensorflow models HOT 2
- Loading and Unloading versions of model HOT 2
- loaded_function = marshal.loads(preprocess_function_string) ValueError: bad marshal data (unknown type code)
- batching feature
- block when enable_ssl is False HOT 1
- tf.GPUOptions bug for Python3.7 HOT 3
- What is the difference between it and tensorflow serving? What are the advantages? Where can I see its benchmarks about inference performance test? HOT 2
- Problem with Docker? HOT 1
- if I will deploy a tf 1.x version SavedModel how to do ? HOT 1
- GPU docker with authentication HOT 1
- docker run with models_config.json HOT 1
- How can I solve this error: KeyError: u'BatchMatMulV2', model: BERT; tensorflow: 1.14. HOT 1
- my model can be loaded but when I send request to do the predict I got some errors HOT 2
- TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'