Comments (3)
Hi @jrash33, glad to hear this repo helped you. Coming to your question, there is already inference REST API code in inferencing/saved_model_inference.py. Inside, there is a method called detect_mask_single_image_using_restapi(). This method sends a request and gets a JSON response. You can make changes according to your needs.
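For reference, here is a minimal sketch of how such a REST request body is typically built for TensorFlow Serving, which accepts the image as a nested list. The model name, URL, and the "inputs" key are illustrative assumptions, not necessarily the repo's exact values; check saved_model_inference.py for those.

```python
# Sketch of a TF Serving REST request with the image sent as a nested list.
# The endpoint URL and the "inputs" key are illustrative assumptions.
import json

import numpy as np


def build_predict_body(image: np.ndarray) -> str:
    """Serialize an HxWx3 uint8 image into the JSON body TF Serving expects."""
    return json.dumps({"inputs": [image.tolist()]})  # batch of one image


# Usage: POST `body` to http://<host>:8501/v1/models/<model>:predict with
# requests.post(url, data=body) and read resp.json()["outputs"].
image = np.zeros((2, 3, 3), dtype=np.uint8)
body = build_predict_body(image)
decoded = json.loads(body)
```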
from mrcnn_serving_ready.
Sorry, I do not have any other version that sends requests using a b64-encoded image. The REST API of the serving model seems to take only a list as input. I do not have a solution as of now, but I can tell you two workarounds which may help:
- (Easy) Make a Flask server to work as a medium between your client and the serving model. Make a gRPC request from Flask to your serving model, as gRPC is much faster.
- (Hard) Before converting the h5 model, create and attach an input head that accepts the image in base64 as part of your model, then convert your model to a serving model. You can look at this AOCR repo, which accepts images as base64 and does all the conversion internally inside the model.
Unfortunately, I do not have time to do all this. Do let me know if you find any other way.
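A bare-bones sketch of the server-side logic for the (Easy) workaround, with the Flask routing and the actual gRPC PredictRequest left as comments so it stays self-contained. Everything here (payload keys, function names) is an illustrative assumption, not code from the repo: the client sends a small base64 payload, the proxy decodes it back into an array, and only then talks to the serving model.

```python
# Illustrative proxy handler for workaround 1: accept a b64-encoded image,
# decode it server-side, then forward it to TF Serving.
import base64

import numpy as np


def decode_b64_image(b64_str: str, shape, dtype=np.uint8) -> np.ndarray:
    """Reverse the client's base64.b64encode(img.tobytes()) step."""
    raw = base64.b64decode(b64_str)
    return np.frombuffer(raw, dtype=dtype).reshape(shape)


def handle_predict(payload: dict) -> dict:
    # In Flask this would be the body of a @app.route("/predict") view
    # reading request.get_json().
    image = decode_b64_image(payload["image_b64"], payload["shape"])
    # Here you would build a tensorflow_serving PredictRequest and send it
    # over a gRPC channel to the serving model; stubbed for illustration.
    return {"received_shape": list(image.shape)}


# Client side: ship the raw pixel buffer as base64, which is far smaller
# than a JSON nested list.
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
payload = {
    "image_b64": base64.b64encode(img.tobytes()).decode("ascii"),
    "shape": [2, 2, 3],
}
result = handle_predict(payload)
```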
Hey @bendangnuksung, wow, thank you so much for the reply. Much appreciated. Do you have any version that can accept b64-encoded images as a single input? I'm trying to pass images through your method to GCP AI Platform and the images seem to be too large. Thanks again!
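To make the size problem concrete: TF Serving's nested-list encoding writes every pixel as decimal text, so the JSON body ends up several times larger than a base64 encoding of the same raw bytes. A quick illustrative comparison (the payload keys are assumptions, and the base64 form only works if the model has an input head that decodes it server-side, as in the hard workaround):

```python
# Compare payload sizes: nested-list JSON vs base64 for the same image.
import base64
import json

import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Nested-list form accepted by a stock TF Serving SavedModel.
list_body = json.dumps({"inputs": [img.tolist()]})

# Base64 form; the model would need an input head that decodes it.
b64_body = json.dumps(
    {"inputs": {"b64": base64.b64encode(img.tobytes()).decode("ascii")}}
)

ratio = len(list_body) / len(b64_body)  # the list form is several times larger
```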