reneweb / react-native-tensorflow
A TensorFlow inference library for React Native.
License: Apache License 2.0
What version of TensorFlow was used to generate the .pb file?
I am using TensorFlow 1.7.1 with react-native-tensorflow 0.1.8 and am getting the error stated in the title.
I'm unable to get the *.pb file working in release mode. Everything works while running in debug mode on a simulator, but when deploying a release build to a simulator or device I get the following error:
Error: Couldn't find 'file://var/.. ../assets/rounded_graph.pb' in bundle
I am using the example project from the repo with few changes beyond pointing to my own model/label files and passing in the appropriate params.
```js
const ir = new TfImageRecognition({
  model: require('./assets/rounded_graph.pb'),
  labels: require('./assets/retrained_labels.txt'),
  imageMean: 128, // Optional, defaults to 117
  imageStd: 128   // Optional, defaults to 1
});

const results = await ir.recognize({
  image: _image,
  outputName: "final_result"
});

if (results) {
  this.setState({ results: results, selectedImage: _image });
}
```
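One common cause of release-mode asset errors like this is that Metro does not treat .pb and .txt files as bundleable assets by default, so they are present in debug (served by the packager) but missing from the release bundle. The library's README suggests an rn-cli.config.js in the project root (for React Native versions before 0.57); a sketch of that configuration:

```javascript
// rn-cli.config.js (project root)
// Registers the model/label file extensions so Metro bundles them as assets.
module.exports = {
  getAssetExts() {
    return ['pb', 'txt'];
  }
};
```

After adding this, rebuild the release bundle so the assets are picked up.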
react-native: 0.52.1
```js
const results = await tfImageRecognition.recognize({
  image: require('../../assets/earth-alighted.jpeg'), // Also tried "file://image_ocation.jpg" of cached image location from React-Native-Camera takePictureAsync()
  inputName: "input",
  inputSize: 224,
  outputName: "output",
  maxResults: 3,
  threshold: 0.1,
});
```
Could it be that the property values (inputSize, outputName, etc.) are not correct? How would I determine what those values should be? I saw in issue #11 that outputName needs to match the model's actual output node name. I'm using the YOLO2 model. Could the issue be that the return result is not being mapped correctly?
I followed the code in your example, and it works fine when I run react-native run-android. But when I build a release version, I get an error: it can't load the resource.

```sh
react-native bundle --entry-file App.js --platform android --dev false --bundle-output ./android/app/src/main/assets/index.android.bundle --assets-dest ./android/app/src/main/res/
```

I use App.js as my entry file, not index.js. I also generated a signing key and configured it, then ran:

```sh
adb install release.apk
```

The install succeeds, but the app can't load the release bundle when I open it.
Hi, I'm receiving the error 'Running model failed: Not found: FetchOutputs node output: not found'. Not sure what I could be doing wrong; I followed the docs and everything installed uneventfully.
```js
export default class Album {
  async _predict() {
    const tfImageRecognition = new TfImageRecognition({
      model: require('../../assets/retrained_graph.pb'),
      labels: require('../../assets/retrained_labels.txt')
    });
    try {
      const results = await tfImageRecognition.recognize({
        image: require('../../assets/1.jpg'),
      });
      console.log(results);
    } catch (err) {
      console.log(err);
    }
  }
}
```
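An error of the form "FetchOutputs node X: not found" usually means the requested output node name does not exist in the graph, and here recognize() falls back to the default name "output". For graphs produced by TensorFlow's retraining script, the output node is conventionally named final_result; that name is an assumption about how this particular graph was trained, so verify it against your own graph. A sketch of passing it explicitly:

```javascript
// Hypothetical fix (sketch): pass the graph's actual output node name
// instead of relying on the library default of "output".
// "final_result" is the usual name for graphs from TensorFlow's retrain.py.
async function predictWithExplicitOutput(tfImageRecognition) {
  return tfImageRecognition.recognize({
    image: require('../../assets/1.jpg'),
    outputName: 'final_result',
  });
}
```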
Hi, I'm trying to run on Android, but I'm getting the following errors:
```
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:70: error: cannot find symbol
    float[] srcData = readableArrayToFloatArray(data.getArray("data"));
                      ^
  symbol:   method readableArrayToFloatArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:73: error: cannot find symbol
    int[] srcData = readableArrayToIntArray(data.getArray("data"));
                    ^
  symbol:   method readableArrayToIntArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:79: error: cannot find symbol
    int[] srcData = readableArrayToIntArray(data.getArray("data"));
                    ^
  symbol:   method readableArrayToIntArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:82: error: cannot find symbol
    byte[] srcData = readableArrayToByteBoolArray(data.getArray("data"));
                     ^
  symbol:   method readableArrayToByteBoolArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:85: error: cannot find symbol
    byte[] srcData = readableArrayToByteStringArray(data.getArray("data"));
                     ^
  symbol:   method readableArrayToByteStringArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
5 errors
:react-native-tensorflow:compileReleaseJavaWithJavac FAILED

FAILURE: Build failed with an exception.
```
I'm trying to run this on Ubuntu.
Showing the result name as "unknown", i.e. results[0].name === "unknown".
As raised in issue #20, it is impossible to feed the same model twice when using the regular RNTensorFlowInference API from the package. Therefore I had to close the model and open it again, but this only works on Android. On iOS, calling close() on the session raises the error "RNTensorFlowInference unrecognized selector sent to instance", which creates a memory error if I keep creating and feeding new instances.
Could you please describe in more detail how you create a model for this lib, and what kind of model it is?
[Error: TransformError App.js: Cannot find module '/Users/akumar/Documents/TFRNdemo/node_modules/@react-native-community/cli/node_modules/metro-react-native-babel-transformer/src/index.js'
```json
{
  "name": "TFRNdemo",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "android": "react-native run-android",
    "ios": "react-native run-ios",
    "start": "react-native start",
    "test": "jest",
    "lint": "eslint ."
  },
  "dependencies": {
    "react": "16.13.1",
    "react-native": "0.63.1",
    "react-native-tensorflow": "^0.1.8"
  },
  "devDependencies": {
    "@babel/core": "^7.10.5",
    "@babel/runtime": "^7.10.5",
    "@react-native-community/eslint-config": "^2.0.0",
    "babel-jest": "^26.1.0",
    "eslint": "^7.4.0",
    "jest": "^26.1.0",
    "metro-react-native-babel-preset": "^0.60.0",
    "react-test-renderer": "16.13.1"
  },
  "jest": {
    "preset": "react-native"
  }
}
```
I cannot run the example on iOS; there is an error: "Failed to install the requested application. The bundle identifier of the application could not be determined. Ensure that the application's Info.plist contains a value for CFBundleIdentifier." I don't know how to solve it.
I installed/linked the package but cannot run models that were built with TensorFlow versions > 1.3.

```
Possible Unhandled Promise Rejection (id: 0):
05-26 21:42:48.749 6305 7014 W ReactNativeJS: Error: NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: conv1/Conv2D = Conv2D[T=DT_FLOAT, _output_shapes=[[64,48,48,96]], data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true](random_shuffle_queue_DequeueMany:1, conv1/kernel/read). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)
```
I would like to spec out support for data types other than double. For instance, the following would be a data type choice:

```cpp
tensorflow::Tensor input_data(tensorflow::DT_INT32, shape);
```

Through tf.feed or tf.feedWithDims there could be a dtype option:

```js
tf.feed(...args, 'int32');
```
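As a sketch of what the JavaScript side of such a dtype option could look like (the names below are proposals for this spec, not existing library API), dtype strings could map onto typed-array constructors before the data crosses the bridge:

```javascript
// Proposed dtype -> typed-array mapping (illustrative only).
const DTYPE_CTORS = {
  float64: Float64Array,
  float32: Float32Array,
  int32: Int32Array,
  uint8: Uint8Array,
};

// Convert a plain JS number array into the typed array for a given dtype.
// Non-integer values are truncated by the integer constructors.
function toTypedArray(data, dtype = 'float64') {
  const Ctor = DTYPE_CTORS[dtype];
  if (!Ctor) throw new Error(`Unsupported dtype: ${dtype}`);
  return Ctor.from(data);
}

console.log(toTypedArray([1.5, 2.5], 'int32')); // → Int32Array [ 1, 2 ]
```

The default of float64 matches the library's current double-only behavior, so existing calls would be unaffected.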
Just cloned the repo and ran the example as-is on a Genymotion Android device with 4 GB RAM. Everything seems to build fine, but I got this error on the device:

```
Could not invoke RNImageRecognition.initImageRecognizer.
null
Failed to allocate a 53884608 byte allocation with 25165824 free bytes and 31MB until OOM, max allowed footprint 93269104, growth limit 100663296.
```

I can imagine it is a "hardware" problem, but I find it a bit strange that it's not working on a 4096 MB RAM device for an image classifier. Maybe use TensorFlow Lite instead? Any suggestions?
Fetching files works only with React Native versions < 0.57.
Check this readme; it works.
I'm using TensorFlow to detect whether data is true or false. It works perfectly on an emulator, but after I generate the .apk and run it on a real device, it always raises an error. Below is my code:
```js
const tfImageRecognition = new TfImageRecognition({
  model: require('path/to/model.pb'),
  labels: require('path/to/label.txt'),
  imageMean: 0, // Optional, defaults to 117
  imageStd: 255 // Optional, defaults to 1
});

tfImageRecognition.recognize({
  image: this.state.imagePath,
  inputName: "input_1",    // Optional, defaults to "input"
  inputSize: 224,          // Optional, defaults to 224
  outputName: "k2tfout_0", // Optional, defaults to "output"
  maxResults: 3,           // Optional, defaults to 3
  threshold: 0.1           // Optional, defaults to 0.1
})
  .then(results => {
    console.warn('done');
    ToastAndroid.show('tensorflow done', ToastAndroid.SHORT);
  })
  .catch(err => {
    ToastAndroid.show(String(err), ToastAndroid.SHORT); // ToastAndroid expects a string, not an Error
    console.error(err);
  });
```
When tfImageRecognition.recognize is called, it always throws an error into the .catch handler.
Hi there!
First, thanks, and great work!
Secondly, I was able to train a chatbot and export the graph in protobuf format in Python by following these tutorials:
https://chatbotsmagazine.com/contextual-chat-bots-with-tensorflow-4391749d0077
https://cole-maclean.github.io/blog/RNN-Based-Subreddit-Recommender-System/
I was wondering if I could use this library to create a local chatbot for mobile.
Is that possible?
Seems that there is a memory leak in the iOS version of the library.
On subsequent uses of the library we can see memory usage steadily increase until the app eventually crashes.
My setup is something like below:
```js
function categorizeImage(imagePath) {
  const processor = new TfImageRecognition({
    model: require('my-model.pb'),
    labels: require('my-labels.txt'),
  });
  return processor.recognize({ image: imagePath, ..... });
}
```
Each time categorizeImage is called, we see memory usage increase. Images provided below.
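Until the leak itself is fixed, one workaround is to create the recognizer once and reuse it across calls instead of constructing a new TfImageRecognition per image. This is a sketch; whether it fully avoids the leak depends on the native side, and the helper name is hypothetical:

```javascript
// Hypothetical memoization wrapper: createFn is invoked only on first use,
// so every subsequent call shares one underlying recognizer instance.
let cachedRecognizer = null;

function getRecognizer(createFn) {
  if (!cachedRecognizer) {
    cachedRecognizer = createFn();
  }
  return cachedRecognizer;
}
```

With this, categorizeImage would call getRecognizer(() => new TfImageRecognition({ ... })) and then invoke recognize on the shared instance.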
Hi,
While using TfImageRecognition I'm getting the following error:

```
Error: Could not create TensorFlow Graph: Invalid argument: No OpKernel was registered to support Op 'Switch' with these attrs.
```

But if I test with the demo .pb file provided in the react-native-tensorflow examples, it works fine. I have already added the rn-cli.config.js file.
Please help me ASAP; I need to submit my project this week.
Hey! Thanks for all the great work.
I'm new to this, so please let me know if I've missed something, but when I tried to build this via ./gradlew build I got the following error:

```
/home/casey/gitrepo/react-native-tensorflow/android/src/main/java/com/rntensorflow/ResourceManager.java:65:
error: createClient() has private access in OkHttpClientProvider
Response response = OkHttpClientProvider.createClient().newCall(request).execute();
```

I assume an update made the .createClient() method private, so I've submitted my resolution in PR #35, but please let me know if there's a better way to address it. Thanks again!
```
fatal error: 'tensorflow/core/framework/op_kernel.h' file not found
#include "tensorflow/core/framework/op_kernel.h"
```
Is it possible to replace TensorFlow with TensorFlow Lite without completely breaking the code? I'm just concerned about the larger .apk files sizes when using the entire TensorFlow library.
RNTensorFlowInferenceModule is missing a method to reset the context. I see a method in RNTensorflowInference.java to reset the context, and another to get the context, but neither is callable from RNTensorFlowInferenceModule. index.js is also missing reset(). I didn't check iOS, but it may be missing there too, since it's absent from index.js.
How can I load an image from a local file?
Hi there,
I had successfully tested your project using react-native init. But when I want to use another graph.pb, it gives me this error:

```
NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: Preprocessor/map/TensorArray_2 = TensorArrayV3clear_after_read=true, dtype=DT_INT32, dynamic_size=false, element_shape=, identical_element_shapes=true, tensor_array_name="". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)
```

How can I know whether a model is suitable for your library?
Any help will be appreciated! Thanks in advance!
Hi,
Please go easy on me; I'm relatively new to tensorflow/ML, I'm more of a web guy.
I have a frozen model trained with the TensorFlow Object Detection API and hope to use React Native to deploy to Android/iOS.
When you say "Next you will need to feed an image in the form of a number array", what tools do you use in React Native to convert the image to a number array?
Thanks,
Matt
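There is no single built-in React Native tool for this; a common approach is to decode the image to raw pixels in JS (for example with a JPEG decoder such as jpeg-js, fed from a base64 file read) and then flatten and normalize the pixel data. The helper below sketches only that second step; the function name is hypothetical, and the default mean/std values are assumptions matching this library's documented defaults (117 and 1):

```javascript
// Hypothetical helper: turn decoded RGBA pixel data (e.g. the `data` field
// from jpeg-js) into the flat, normalized RGB number array a feed expects.
function pixelsToNumberArray(rgba, imageMean = 117, imageStd = 1) {
  const out = new Array((rgba.length / 4) * 3);
  let j = 0;
  for (let i = 0; i < rgba.length; i += 4) {
    out[j++] = (rgba[i] - imageMean) / imageStd;     // R
    out[j++] = (rgba[i + 1] - imageMean) / imageStd; // G
    out[j++] = (rgba[i + 2] - imageMean) / imageStd; // B (alpha is dropped)
  }
  return out;
}

// Example: a single opaque white pixel
console.log(pixelsToNumberArray([255, 255, 255, 255])); // → [ 138, 138, 138 ]
```

The mean/std you pass must match what the model was trained with, just like the imageMean/imageStd options on TfImageRecognition.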
Are there any plans to support importing & using TensorFlow DensePose?
I tried to replicate what's in the tfjs docs. I installed the @tensorflow-models/posenet package as well as @tensorflow/[email protected], then added a simple method to load the PoseNet model as follows:
```js
componentDidMount() {
  this.loadNet();
}

async getDensePose() {
  const imageScaleFactor = 0.50;
  const flipHorizontal = false;
  const outputStride = 16;

  const net = await posenet.load();
  // estimateSinglePose returns a Promise, so it needs an await
  const pose = await net.estimateSinglePose(
    require('./images/sample.jpg'),
    imageScaleFactor,
    flipHorizontal,
    outputStride
  );
  console.log('pose estimation result: ' + pose);
}
```
But unfortunately I'm getting this error:

```
Unhandled JS Exception: No backend found in registry.
```

Any ideas?
Thanks
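"No backend found in registry" usually means no tfjs backend was registered before the model was used. In React Native this is normally handled by @tensorflow/tfjs-react-native: importing that package registers the backend, and tf.ready() must resolve before loading any model. A sketch of the wiring (exact package versions and how you install the platform package are assumptions based on the tfjs-react-native docs; this is setup code, not something runnable outside an RN project):

```javascript
import * as tf from '@tensorflow/tfjs';
// Side-effect import: registers the React Native backend with tfjs.
import '@tensorflow/tfjs-react-native';
import * as posenet from '@tensorflow-models/posenet';

async function loadNet() {
  await tf.ready();        // wait until a backend has been selected
  return posenet.load();   // safe to load models only after tf.ready()
}
```

Calling loadNet() from componentDidMount (and awaiting it before any estimate call) should avoid the empty-registry state.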
When trying to integrate a pretrained TensorFlow model with Expo (React Native), the following error occurs within these lines:
```js
_graph = async () => {
  var preder2 = null;
  var items = "";
  this.setState({ result: "", value: null });

  let result = await ImagePicker.launchImageLibraryAsync({
    allowsEditing: true,
    aspect: [4, 3],
    base64: true,
  });
  if (!result.cancelled) {
    this.setState({ image: result });
  }

  try {
    const tfImageRecognition = new TfImageRecognition({
      model: require('./assets/tensorflow_inception_graph.pb'),
      labels: require('./assets/tensorflow_labels.txt')
    });
    const results = await tfImageRecognition.recognize({
      image: this.state.image
    });
    results.forEach(
      result => ((preder2 = result.confidence), (items = result.name))
    );
    await tfImageRecognition.close();
    this.setState({
      result: items,
      value: preder2 * 100 + "%"
    });
    console.log(this.state.result);
  } catch (err) {
    this.setState({
      result: "No Internet",
      value: "Please connect to the internet"
    });
    console.log(err);
  }
}
```
This generates the following error:

```
undefined is not an object (evaluating 'RNImageRecognition.initImageRecognizer')
```

I have been trying to find out why this is not working, but I cannot find a definitive solution. The relative paths linking to the assets are correct, and the extensions are present in app.json. Furthermore, the model was trained using the TensorFlow API, which should make it compatible with the React Native implementation.
I observed that after running:

```js
const tfImageRecognition = new TfImageRecognition({
  model: require('./assets/tensorflow_inception_graph.pb'),
  labels: require('./assets/tensorflow_labels.txt')
});
```

the code jumped immediately to the catch (err) branch, which suggests it could not load the model and labels. I am using Expo SDK 28.0.0, Expo XDE, and react-native-tensorflow ^0.1.8.
It complains that the input has already been fed before...
I guess it's not :(
What I did:

```sh
npx react-native bundle --platform android --dev false --entry-file index.js --bundle-output android/app/src/main/assets/index.android.bundle --assets-dest android/app/src/main/res
cd android && ./gradlew assembleDebug
```

and, as expected, I found the apk at mobile-app/android/app/build/outputs/apk/debug/app-debug.apk. However, this .apk file does not behave the way the app does when I run:

```sh
npx react-native start
npx react-native run-android
```
Package.json:

```json
{
  "name": "Myapp",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "android": "react-native run-android",
    "ios": "react-native run-ios",
    "start": "react-native start",
    "test": "jest",
    "lint": "eslint ."
  },
  "dependencies": {
    "@react-native-community/async-storage": "^1.12.1",
    "@react-native-community/masked-view": "^0.1.11",
    "@react-native-picker/picker": "^1.16.1",
    "@react-navigation/bottom-tabs": "^5.11.11",
    "@react-navigation/native": "^5.9.4",
    "@react-navigation/stack": "^5.14.5",
    "@teachablemachine/image": "^0.8.4",
    "@tensorflow-models/mobilenet": "^2.1.0",
    "@tensorflow/tfjs": "^3.7.0",
    "@tensorflow/tfjs-core": "^3.7.0",
    "@tensorflow/tfjs-react-native": "^0.5.0",
    "axios": "^0.21.1",
    "expo-av": "^9.1.2",
    "expo-camera": "^11.0.3",
    "expo-gl": "^10.3.0",
    "expo-gl-cpp": "^10.3.0",
    "expo-image-manipulator": "^9.1.0",
    "lottie-react-native": "^4.0.2",
    "react": "17.0.1",
    "react-native": "0.64.2",
    "react-native-fs": "^2.18.0",
    "react-native-gesture-handler": "^1.10.3",
    "react-native-picker-select": "^8.0.4",
    "react-native-reanimated": "^2.2.0",
    "react-native-responsive-screen": "^1.4.2",
    "react-native-safe-area-context": "^3.2.0",
    "react-native-screens": "^3.4.0",
    "react-native-shapes": "^0.1.0",
    "react-native-unimodules": "^0.13.3"
  },
  "devDependencies": {
    "@babel/core": "^7.12.9",
    "@babel/runtime": "^7.12.5",
    "@react-native-community/eslint-config": "^2.0.0",
    "babel-jest": "^26.6.3",
    "eslint": "7.14.0",
    "jest": "^26.6.3",
    "metro-react-native-babel-preset": "^0.64.0",
    "react-test-renderer": "17.0.1"
  },
  "jest": {
    "preset": "react-native"
  }
}
```
I have installed the library and am trying to use the available example. When I load the model from ../asset/tensorflow_inception_graph.pb, I get an error that it is unable to resolve the module. I have checked the path of the model, which is correct. Can anyone help me resolve this? Below is the error log:

```
bundling failed: UnableToResolveError: Unable to resolve module ../asset/tensorflow_inception_graph.pb
from /Users/fazeel/CGapp/code/LoginDemo/src/screens/ImageRecognitionAI.js
: could not resolve `/Users/fazeel/CGapp/code/LoginDemo/src/asset/tensorflow_inception_graph.pb' as a file nor as a folder
at ModuleResolver._loadAsFileOrDirOrThrow (/Users/fazeel/CGapp/code/LoginDemo/node_modules/metro-bundler/src/node-haste/DependencyGraph/ModuleResolution.js:337:11)
```
I have trained a TensorFlow model to identify language. How can I use it?
When running the same model against the same picture(s), the predictions produced on the mobile device are vastly different from those produced by the Linux VM. Is anyone else seeing this?
Wrong:

```js
results.forEach(result =>
  console.log(
    results.id,         // Id of the result
    results.name,       // Name of the result
    results.confidence  // Confidence value between 0 and 1
  )
)
```

Right:

```js
results.forEach(result =>
  console.log(
    result.id,         // Id of the result
    result.name,       // Name of the result
    result.confidence  // Confidence value between 0 and 1
  )
)
```