
brianmtully / flutter_google_ml_vision

Flutter Plugin for Google ML Kit Vision

License: BSD 3-Clause "New" or "Revised" License

Java 16.55% Objective-C 17.05% Dart 65.67% Ruby 0.73%

flutter_google_ml_vision's People

Contributors

brianmtully, lvlrsajjad, musthafa1996, shliama


flutter_google_ml_vision's Issues

Build error on Android

FAILURE: Build failed with an exception.

  • What went wrong:
    Execution failed for task ':onesignal_flutter:generateReleaseRFile'.

In project 'google_ml_vision' a resolved Google Play services library dependency depends on another at an exact version (e.g. "[10.2.1, 17.3.99]", but isn't being resolved to that version. Behavior exhibited by the library will be unknown.

Dependency failing: com.onesignal:OneSignal:3.16.0 -> com.google.firebase:firebase-messaging@[10.2.1, 17.3.99], but firebase-messaging version was 17.3.4.

The following dependencies are project dependencies that are direct or have transitive dependencies that lead to the artifact with the issue.
-- Project 'google_ml_vision' depends on project 'onesignal_flutter' which depends onto com.onesignal:[email protected]
-- Project 'google_ml_vision' depends on project 'onesignal_flutter' which depends onto com.onesignal:OneSignal@{strictly 3.16.0}
-- Project 'google_ml_vision' depends on project 'onesignal_flutter' which depends onto com.google.firebase:firebase-messaging@{strictly 17.3.4}

For extended debugging info execute Gradle from the command line with ./gradlew --info :google_ml_vision:assembleDebug to see the dependency paths to the artifact. This error message came from the strict-version-matcher-plugin Gradle plugin, report issues at https://github.com/google/play-services-plugins and disable by removing the reference to the plugin ("apply 'strict-version-matcher-plugin'") from build.gradle.
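For readers hitting the same conflict: one common workaround (an assumption about this setup, not a fix confirmed in this thread) is to force a single firebase-messaging version inside the range OneSignal declares, in the app-level android/app/build.gradle:

```groovy
// Hypothetical workaround: pin firebase-messaging to one version inside the
// range OneSignal accepts ("[10.2.1, 17.3.99]"). 17.3.4 is taken from the
// error log above; check your own dependency report before relying on this.
configurations.all {
    resolutionStrategy {
        force 'com.google.firebase:firebase-messaging:17.3.4'
    }
}
```

Upgrading the OneSignal dependency to a release that accepts the resolved Firebase version is usually the cleaner long-term fix.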

Returning detected Text from Image

Hello,

I am using the latest version and I have a quick & dirty function to take a photo and to read the text from that photo.

onPressed: () async {
  final picker = ImagePicker();

  final pickedFile = await picker.getImage(
    source: ImageSource.camera,
  );
  if (pickedFile == null) return; // user cancelled the camera

  final finalImageFile = File(pickedFile.path);
  logger.d(finalImageFile);
  final GoogleVisionImage visionImage =
      GoogleVisionImage.fromFile(finalImageFile);

  final TextRecognizer textRecognizer =
      GoogleVision.instance.textRecognizer();
  final VisionText visionText =
      await textRecognizer.processImage(visionImage);

  String? text = visionText.text;
  logger.d(text);

  for (TextBlock block in visionText.blocks) {
    final Rect boundingBox = block.boundingBox!;
    final List<Offset> cornerPoints = block.cornerPoints;
    final String? blockText = block.text;
    final List<RecognizedLanguage> languages =
        block.recognizedLanguages;

    for (TextLine line in block.lines) {
      // Same getters as TextBlock
      for (TextElement element in line.elements) {
        logger.d(element.text!);
        logger.d('element');
      }
    }
  }
  logger.d('endloop');
  logger.d(text!);
  textRecognizer.close();
},

I am using a Logger Package, to debug on a physical device.

At first, the following error appears immediately after opening the camera (so it's not related to this package):

[Camera] Failed to read exposureBiasesByMode dictionary: Error Domain=NSCocoaErrorDomain Code=4864 "*** -[NSKeyedUnarchiver _initForReadingFromData:error:throwLegacyExceptions:]: data is NULL" UserInfo={NSDebugDescription=*** -[NSKeyedUnarchiver _initForReadingFromData:error:throwLegacyExceptions:]: data is NULL}

After taking a photo it returns the path correctly.

But how do I return all the detected text? This is not clear to me. Even though I have several log functions in the for loop, for example, they don't print any text.

Is there something I'm doing wrong? I just need the recognised text, without any color, preview etc.

Thank you for any kind of help - it's really appreciated! :)
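For what it's worth, a minimal sketch of collecting every recognized line into one string, assuming the same google_ml_vision API as the snippet above (the function name is my own):

```dart
import 'dart:io';

import 'package:google_ml_vision/google_ml_vision.dart';

/// Sketch: run text recognition on [imageFile] and join every recognized
/// line into a single string. Returns an empty string when nothing is found.
Future<String> readAllText(File imageFile) async {
  final GoogleVisionImage visionImage = GoogleVisionImage.fromFile(imageFile);
  final TextRecognizer textRecognizer = GoogleVision.instance.textRecognizer();
  try {
    final VisionText visionText =
        await textRecognizer.processImage(visionImage);
    final buffer = StringBuffer();
    for (final block in visionText.blocks) {
      for (final line in block.lines) {
        buffer.writeln(line.text ?? '');
      }
    }
    return buffer.toString();
  } finally {
    textRecognizer.close();
  }
}
```

Note that `visionText.text` already returns the full recognized text in one string; the nested loop is only needed when per-line or per-element data (bounding boxes, languages) matters.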

Bump version on pub.dev

Hey @brianmtully

The package is now null safe.
I did a few tests on different devices and it works as I expected.

If you agree, please publish the update to pub.dev.

I'm currently working on an implementation of this package and will share a new example soon; hopefully you can merge it too.

Thank you for this!

The official firebase_ml_vision package has been dead/unmaintained for months now. This was a very easy switch-out and everything seems to work as before.

Thank you for creating this!

run example; google_ml_vision_example depends on e2e >=0.2.0+1 which doesn't support null safety

I'm trying to run the example and get the following; any help would be appreciated.

flutter_google_ml_vision\example>flutter run
Resolving dependencies... (2.2s)
The current Dart SDK version is 3.0.1.

Because google_ml_vision_example depends on e2e >=0.2.0+1 which doesn't support null safety, version solving
  failed.

The lower bound of "sdk: '>=2.1.0 <3.0.0'" must be 2.12.0 or higher to enable null safety.
For details, see https://dart.dev/null-safety

ML Vision dependencies

Hey, thanks for the plugin 🙌

I have a question regarding the Android & iOS dependencies: in the firebase_ml_vision plugin we had to manually specify whichever models we wanted to use in either build.gradle or the Podfile.

Here, I see these dependencies specified inside android/build.gradle:

implementation 'com.google.mlkit:face-detection:16.0.6'
implementation 'com.google.mlkit:barcode-scanning:16.1.1'
implementation 'com.google.mlkit:image-labeling:17.0.3'
implementation 'com.google.mlkit:object-detection:16.2.3'
implementation 'com.google.android.gms:play-services-mlkit-text-recognition:16.1.3'
implementation 'com.google.mlkit:language-id:16.1.1'

And these inside ios/google_ml_vision.podspec:

s.dependency 'GoogleMLKit/BarcodeScanning'
s.dependency 'GoogleMLKit/FaceDetection'
s.dependency 'GoogleMLKit/ImageLabeling'
s.dependency 'GoogleMLKit/TextRecognition'

Does it mean all these binaries are being packaged inside iOS/Android apps, even if I only want to use barcode scanning?

How are bounding boxes interpreted?

I followed the basic docs to create the FaceDetector. However, I'm not quite sure how the bounding box values are derived, because the values I get are out of range for every mobile device screen.

Example:

Rect.fromLTRB(1285.0, 2859.0, 3054.0, 4627.0)

I want to add the bounding box to the taken image.

Am I missing something?
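A guess at what is happening here (an assumption, not confirmed in this thread): ML Kit reports boxes in the pixel coordinate space of the analyzed image, which is usually much larger than the device's logical screen size, so the boxes must be scaled before painting. A minimal sketch, assuming the image is drawn uniformly stretched into the widget:

```dart
import 'dart:ui';

/// Sketch: map a bounding box from image-pixel coordinates into the
/// coordinate space of a widget that displays the whole image stretched
/// to [widgetSize]. The names here are illustrative, not plugin API.
Rect scaleBoundingBox(Rect box, Size imageSize, Size widgetSize) {
  final double scaleX = widgetSize.width / imageSize.width;
  final double scaleY = widgetSize.height / imageSize.height;
  return Rect.fromLTRB(
    box.left * scaleX,
    box.top * scaleY,
    box.right * scaleX,
    box.bottom * scaleY,
  );
}
```

With a BoxFit other than fill, the letterboxing offset would also have to be added after scaling.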

null safety

Hey @brianmtully
Thank you for your awesome work!

Are you planning to migrate the plugin to null safety?

App keeps crashing while using Google_ml_vision

I am using google_ml_vision in my app, where I need to detect a hand, but the app keeps crashing on Android. I haven't tested on iOS yet. I have tried the latest version (google_ml_vision: ^0.0.8) and an older version (google_ml_vision: ^0.0.7). The logs follow.

E/AndroidRuntime(14020): Process: com.synergates.dob.digital_onboarding.js, PID: 14020
E/AndroidRuntime(14020): java.lang.NoClassDefFoundError: Failed resolution of: Lcom/google/mlkit/vision/common/internal/Detector;
E/AndroidRuntime(14020): 	at com.brianmtully.flutter.plugins.googlemlvision.MlVisionHandler.handleDetection(GoogleMlVisionHandler.java:71)
E/AndroidRuntime(14020): 	at com.brianmtully.flutter.plugins.googlemlvision.MlVisionHandler.onMethodCall(GoogleMlVisionHandler.java:37)
E/AndroidRuntime(14020): 	at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler.onMessage(MethodChannel.java:258)
E/AndroidRuntime(14020): 	at io.flutter.embedding.engine.dart.DartMessenger.invokeHandler(DartMessenger.java:295)
E/AndroidRuntime(14020): 	at io.flutter.embedding.engine.dart.DartMessenger.lambda$dispatchMessageToQueue$0$io-flutter-embedding-engine-dart-DartMessenger(DartMessenger.java:322)
E/AndroidRuntime(14020): 	at io.flutter.embedding.engine.dart.DartMessenger$$ExternalSyntheticLambda0.run(Unknown Source:12)
E/AndroidRuntime(14020): 	at android.os.Handler.handleCallback(Handler.java:942)
E/AndroidRuntime(14020): 	at android.os.Handler.dispatchMessage(Handler.java:99)
E/AndroidRuntime(14020): 	at android.os.Looper.loopOnce(Looper.java:240)
E/AndroidRuntime(14020): 	at android.os.Looper.loop(Looper.java:351)
E/AndroidRuntime(14020): 	at android.app.ActivityThread.main(ActivityThread.java:8380)
E/AndroidRuntime(14020): 	at java.lang.reflect.Method.invoke(Native Method)
E/AndroidRuntime(14020): 	at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:584)
E/AndroidRuntime(14020): 	at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1013)
E/AndroidRuntime(14020): Caused by: java.lang.ClassNotFoundException: Didn't find class "com.google.mlkit.vision.common.internal.Detector" on path: DexPathList[[zip file "/data/app/~~o2uTxdDT44Su9DZpXChTgQ==/com.synergates.dob.digital_onboarding.js-EbcLJllgqv4-YKz7ykAixA==/base.apk"],nativeLibraryDirectories=[/data/app/~~o2uTxdDT44Su9DZpXChTgQ==/com.synergates.dob.digital_onboarding.js-EbcLJllgqv4-YKz7ykAixA==/lib/arm64, /data/app/~~o2uTxdDT44Su9DZpXChTgQ==/com.synergates.dob.digital_onboarding.js-EbcLJllgqv4-YKz7ykAixA==/base.apk!/lib/arm64-v8a, /system/lib64, /system_ext/lib64]]
E/AndroidRuntime(14020): 	at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:259)
E/AndroidRuntime(14020): 	at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
E/AndroidRuntime(14020): 	at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
E/AndroidRuntime(14020): 	... 14 more
I/Process (14020): Sending signal. PID: 14020 SIG: 9
Lost connection to device.

This is my code where I am using the package:

Future<void> _captureImage() async {
    _cameraController?.setFocusMode(FocusMode.auto);
    capturedImage = await _cameraController!.takePicture();
    log('capturing');
    if (capturedImage != null) {
      // processImage(File(capturedImage!.path));
      final ImageLabeler imageLabeler = GoogleVision.instance.imageLabeler(const ImageLabelerOptions());
      final GoogleVisionImage visionImage =
          GoogleVisionImage.fromFile(File(capturedImage!.path));
      final List<ImageLabel> labels =
          await imageLabeler.processImage(visionImage);
      if (labels.isEmpty) {
        log('label not found');
        _captureImage();
      } else {
        log('label found');
        for (var value in labels) {
          log('labels $value');
        }
      }
    }
  }

Face detection for iOS not working

Thanks for creating this plugin. It's really useful. But I have a problem with it: I used the example code, but face detection is totally not working on iOS. On Android it works.

telegram-cloud-document-5-6154404972269142649.mp4

Device info:

  • iPhone 6S
  • iOS 14.4.1
log
[Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
[Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
flutter: path: /private/var/mobile/Containers/Data/Application/33276A8E-8373-41B4-8CA8-389B492DFD52/tmp/image_picker_EBC530C9-94FB-4F43-A3E4-ED922EB80521-2839-000001DA9BE2991F.jpg
flutter: []
2021-05-25 2:20:28.351 PM Nusawork[2839/0x1057d3880] [lvl=3] +[MLKITx_CCTClearcutUploader crashIfNecessary] Multiple instances of CCTClearcutUploader were instantiated. Multiple uploaders function correctly but have an adverse affect on battery performance due to lock contention.
Initialized TensorFlow Lite runtime.
[Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
[Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
flutter: path: /private/var/mobile/Containers/Data/Application/33276A8E-8373-41B4-8CA8-389B492DFD52/tmp/image_picker_7C9F0886-5B5A-4C57-84AC-E49A0B2B62F5-2839-000001DAB1875B05.jpg
flutter doctor -v
[✓] Flutter (Channel stable, 2.2.0, on Mac OS X 10.15.7 19H2 darwin-x64, locale en-EC)
    • Flutter version 2.2.0 at /Users/yudisetiawan/Downloads/flutter
    • Framework revision b22742018b (10 days ago), 2021-05-14 19:12:57 -0700
    • Engine revision a9d88a4d18
    • Dart version 2.13.0

[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.2)
    • Android SDK at /Users/yudisetiawan/Library/Android/sdk
    • Platform android-30, build-tools 30.0.2
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)
    • All Android licenses accepted.

[✓] Xcode - develop for iOS and macOS
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 12.4, Build version 12D4e
    • CocoaPods version 1.10.1

[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome

[✓] Android Studio (version 4.1)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)

[✓] IntelliJ IDEA Community Edition (version 2020.2.3)
    • IntelliJ at /Applications/IntelliJ IDEA CE.app
    • Flutter plugin version 55.1.2
    • Dart plugin version 202.8443

[✓] VS Code (version 1.55.2)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 3.21.0

[✓] Connected device (3 available)
    • Yudi’s iPhone (mobile) • 1b7890540306c7d8155bacffabc03043d4c28bf9 • ios            • iOS 14.4.1
    • macOS (desktop)        • macos                                    • darwin-x64     • Mac OS X 10.15.7 19H2 darwin-x64
    • Chrome (web)           • chrome                                   • web-javascript • Google Chrome 90.0.4430.212

• No issues found!

[Android] Face detection only works with Samsung smartphones

I have initialized a face detector object using google_ml_vision v. ^5.0.0.

I'm using Flutter CameraController. Each time the method controller.startImageStream() is called, the image taken from CameraPreview is saved and processed in order to create an image metadata object:

CameraImage? mlCameraImage;
GoogleVisionImageMetadata? mlMetaData;
  
Future<void> setInputImage(CameraImage image, int rotationDegrees) async {
    mlCameraImage = image;
    ImageRotation rotation;
    switch (rotationDegrees) {
      case 90:
        rotation = ImageRotation.rotation90;
        break;
      case 180:
        rotation = ImageRotation.rotation180;
        break;
      case 270:
        rotation = ImageRotation.rotation270;
        break;
      default:
        rotation = ImageRotation.rotation0;
    }
    mlMetaData = GoogleVisionImageMetadata(
        rawFormat: image.format.raw,
        size: Size(image.width.toDouble(),image.height.toDouble()),
        planeData: image.planes.map((currentPlane) => GoogleVisionImagePlaneMetadata(
            bytesPerRow: currentPlane.bytesPerRow,
            height: currentPlane.height,
            width: currentPlane.width
        )).toList(),
        rotation: rotation,
    );
  }

Then I use mlCameraImage and mlMetaData as input values for face detection algorithm.

My detector is

_mlDetector = GoogleVision.instance.faceDetector(
        FaceDetectorOptions(enableClassification: true,
        enableContours: true)
    );

This configuration performs excellently on Samsung smartphones, but doesn't actually work on other smartphones (for example, Xiaomi) or on tablets (even Samsung tablets).

I tried to rotate my input image using all existing ImageRotation objects, but can't notice any particular change in my app behavior.

Any help would be very welcome, thanks!

Exception: Null check operator used on a null value

First of all... Love this package.
Thank you for all of the work you put in so far.

Now to my problem:
Everything worked like a charm, but since the barcode detection process is pretty heavy, I want to offload it to a separate isolate.
Here is my stripped-down (pseudo) code:

Future<void> callerFunction() async {
    String path = "getting path from camera (file is XFile)";
    await compute(doDetection, path);
}

FutureOr<String> doDetection(String imagePath) async {
    // initialize barcode detector
    final BarcodeDetector _barcodeDetector =
      GoogleVision.instance.barcodeDetector();

    // detect barcodes in camera image
    final GoogleVisionImage visionImage =
        GoogleVisionImage.fromFilePath(imagePath);

    // the error is thrown here
    final List<Barcode> barcodes =
        await _barcodeDetector.detectInImage(visionImage);

    // other code here, but this code does not get executed after error is thrown
}

After the refactoring I get the following error:
Exception: Null check operator used on a null value
After a bit of digging, I am pretty sure that this is because of the following line:

final List<Barcode> barcodes =
        reply!.map((barcode) => Barcode._(barcode)).toList();

The reply is null.

So, is it possible to do the recognition in a separate isolate? Or how can I fix this issue?
Any help or hints are appreciated. If you need more information, just let me know :).

Best regards, Louis :)
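One possible explanation (an assumption; not confirmed by the maintainer here): plugins like this one communicate over platform channels, which historically only work on the root isolate, so a detector created inside compute() never receives a reply and `reply` stays null. A sketch of the workaround of keeping the platform call on the main isolate:

```dart
import 'package:google_ml_vision/google_ml_vision.dart';

/// Sketch: run barcode detection on the root isolate, where platform
/// channels are available, instead of inside compute(). Only pure-Dart
/// post-processing of the returned barcodes belongs in a worker isolate.
Future<List<Barcode>> detectBarcodes(String imagePath) async {
  final BarcodeDetector detector = GoogleVision.instance.barcodeDetector();
  final GoogleVisionImage image = GoogleVisionImage.fromFilePath(imagePath);
  try {
    return await detector.detectInImage(image);
  } finally {
    detector.close();
  }
}
```

The heavy work happens natively on a background thread anyway, so the Dart-side call should not block the UI for long.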

No text detected with a physical Android device

Using the latest version 0.0.5 and a physical Android phone on Android 11 (API 30), I can't seem to get the example to detect text. This util method returns a VisionText instance that should have a list of blocks/lines/elements, but it always returns an empty [] for its blocks field. I have no issues on iOS. I'm not sure how to debug further; does anyone else have this issue?

flutter doctor -v

[✓] Flutter (Channel stable, 2.2.1, on Mac OS X 10.15.7 19H1030 darwin-x64, locale en-US)
    • Flutter version 2.2.1 at /Users/cswkim/Dev/flutter
    • Framework revision 02c026b03c (4 days ago), 2021-05-27 12:24:44 -0700
    • Engine revision 0fdb562ac8
    • Dart version 2.13.1

[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
    • Android SDK at /Users/cswkim/Library/Android/sdk
    • Platform android-29, build-tools 29.0.2
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)
    • All Android licenses accepted.

[✓] Xcode - develop for iOS and macOS
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 12.4, Build version 12D4e
    • CocoaPods version 1.10.1

[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome

[✓] Android Studio (version 4.1)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)

[✓] VS Code (version 1.56.2)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 3.22.0

FaceDetector randomly crashing APP on iOS

*** -[NSMutableArray addObjectsFromArray:]: array argument is not an NSArray
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSMutableArray addObjectsFromArray:]: array argument is not an NSArray'
*** First throw call stack:
(0x19998586c 0x1ae9a0c50 0x1999f5e1c 0x1999fc0ec 0x19986f1f4 0x106c23054 0x106c22334 0x19957824c 0x199579db0 0x1995877ac 0x19990111c 0x1998fb120 0x1998fa21c 0x1b14c6784 0x19c33aee8 0x19c34075c 0x104283b84 0x1995ba6b0)
libc++abi.dylib: terminating with uncaught exception of type NSException

  • thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
    frame #0: 0x00000001c78da414 libsystem_kernel.dylib`__pthread_kill + 8

Potential performance enhancements?

First and foremost, thank you so much for this package. I had no issues setting it up and it seems to be working as intended. This may be out of scope for the library, but I wanted to explore any potential areas for performance gains. I'm testing on an older physical device (an iPhone 6S from 2015) and it runs fairly smoothly using the 100 ms delay between detection calls, and I even upped the resolution to high. On a 60 fps device, I guess the 100 ms delay equates to roughly detecting every 6 frames? What are everyone else's experiences regarding frame rendering, battery life, memory, CPU, etc.? Using the DevTools performance profiler, the raster and UI thread times are steady and on average well below the threshold for jank, so all good there.

I came across this older flutter issue discussing image processing in isolates and garbage collection and wondered if it applied at all to this package? Are isolates in general something that could be used for performance gains?

Face detection: face and landmark drawing incorrectly with Front camera on Android devices

Thanks a lot for this great and powerful plugin.
I have a project that needs this feature (face detection), and luckily I found this plug-in.
Everything works flawlessly except with the Android front camera: the rectangle drawn on the detected face is positioned incorrectly; see the snapshots below. I've already fixed this by mirroring the rectangle vertically. I'm not sure this is the optimal code; let me know if there is a better way.

Screenshot_2

Screenshot_3

Moreover, I also added one more for loop to detect multiple faces instead of just one face as in your example.

Image stream

Is it possible to create a live image stream from the camera and use it with the Google ML functionality, like barcode and text recognition?

Cases where it doesn't work

I have been getting a lot of cases where the package cannot detect a face for some reason, not only on Android but also on iOS.

Not able to detect text in Arabic language

Hello,

I am trying to detect text from an image which contains text written in the Arabic language, but every time it returns an empty string.
Here's the minimal reproducible code:

void checkText() async {
  final visionImage =
      GoogleVisionImage.fromFile(File.fromUri(Uri.parse(prov.getImage.path)));
  final TextRecognizer textRecognizer = GoogleVision.instance.textRecognizer();
  final VisionText visionText = await textRecognizer.processImage(visionImage);
  print("Detected text ---> ${visionText.text}");
}

The link to image i am processing -> https://www.verifave.com/wp-content/uploads/2020/11/Old-Egyptian-Driving-License-.png

Can someone suggest what's wrong here, or am I missing some configuration?

CameraPreviewScanner different results iOS

First, thanks for your work implementing the standalone ML Kit, because firebase_ml_vision is not working with Flutter 2. I'm getting different results between the example CameraPreviewScanner and PictureScanner. I tried changing between ResolutionPreset.high and ResolutionPreset.veryHigh, but the results are still the same.

Results from Android 10, device Xiaomi Mi 9T:
Screen Shot 2021-04-29 at 16 36 11

Result from iOS, device iPhone X with iOS 14:

Screen Shot 2021-04-29 at 16 28 05

Confidence is null of TextRecognizer

The confidence of the blocks inside TextBlock is null, regardless of the text I take a photo of. It resembles an issue in the firebase repo.

I arbitrarily return

return visionText.blocks[0].text

And I get the text mostly right (in English).

I would like to be able to take TextBlock with the highest confidence.
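If confidence values do come through, a null-safe sketch of picking the highest-confidence block (treating a null confidence as lowest, given the behavior reported above; the helper name is my own):

```dart
import 'package:google_ml_vision/google_ml_vision.dart';

/// Sketch: return the text of the block with the highest confidence,
/// falling back to the first block when every confidence is null.
String? highestConfidenceText(VisionText visionText) {
  TextBlock? best;
  for (final block in visionText.blocks) {
    final double? c = block.confidence;
    final double? bestC = best?.confidence;
    if (best == null ||
        (c != null && c > (bestC ?? double.negativeInfinity))) {
      best = block;
    }
  }
  return best?.text;
}
```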

ALL_FACE contour positions wrong on iOS

Same Flutter code, using the same contour indexes (based on the official docs): iOS clearly has an issue. The code worked fine with the dead firebase_ml_vision plugin.

The code below looks okay to me, since it's pretty much the same as on Android. But I guess there is some issue with the order of the contour parts 🤔.

+ (id)getContourPoints:(MLKFace *)face contour:(MLKFaceContourType)contourType {

(Side-by-side screenshots: Android vs iOS contour output)

Error when compiling IOS

For some reason whenever I want to compile I get this error:

Undefined symbol: OBJC_CLASS$_MLKTextRecognizer
Undefined symbol: OBJC_CLASS$_MLKBarcodeScanner
Undefined symbol: OBJC_CLASS$_MLKImageLabelerOptions
Undefined symbol: OBJC_CLASS$_MLKVisionImage
Undefined symbol: OBJC_CLASS$_MLKImageLabeler
Undefined symbol: _MLKFaceLandmarkTypeRightCheek
Undefined symbol: _MLKFaceLandmarkTypeMouthLeft
Undefined symbol: OBJC_CLASS$_MLKBarcodeScannerOptions
Undefined symbol: _MLKFaceLandmarkTypeLeftEye
Undefined symbol: _MLKFaceContourTypeUpperLipBottom
Undefined symbol: _MLKFaceLandmarkTypeMouthBottom
Undefined symbol: _MLKFaceContourTypeLowerLipBottom
Undefined symbol: _MLKFaceLandmarkTypeLeftEar
Undefined symbol: _MLKFaceLandmarkTypeLeftCheek
Undefined symbol: _MLKFaceLandmarkTypeRightEar
Undefined symbol: _MLKFaceContourTypeNoseBottom
Undefined symbol: _MLKFaceContourTypeUpperLipTop
Undefined symbol: _MLKFaceContourTypeRightEyebrowBottom
Undefined symbol: _MLKFaceContourTypeLeftEyebrowTop
Undefined symbol: _MLKFaceContourTypeRightEyebrowTop
Undefined symbol: _MLKFaceContourTypeRightEye
Undefined symbol: _MLKFaceContourTypeNoseBridge
Undefined symbol: _MLKFaceContourTypeFace
Undefined symbol: _MLKFaceContourTypeLeftEye
Undefined symbol: _MLKFaceContourTypeLeftEyebrowBottom
Undefined symbol: _MLKFaceLandmarkTypeMouthRight
Undefined symbol: OBJC_CLASS$_MLKFaceDetector
Undefined symbol: _MLKFaceContourTypeLowerLipTop
Undefined symbol: _MLKFaceContourTypeLeftCheek
Undefined symbol: OBJC_CLASS$_MLKFaceDetectorOptions
Undefined symbol: _MLKFaceLandmarkTypeRightEye
Undefined symbol: _MLKFaceLandmarkTypeNoseBase
Undefined symbol: _MLKFaceContourTypeRightCheek

[BUG] Example not working on iOS

I am trying to get the example to work on iOS. I get the following output on the Debug console:

CocoaPods' output:

Preparing
Analyzing dependencies
Inspecting targets to integrate
Using ARCHS setting to build architectures of target Pods-Runner: (``)
Fetching external sources
-> Fetching podspec for Flutter from `Flutter`
-> Fetching podspec for `camera` from `.symlinks/plugins/camera/ios`
-> Fetching podspec for `e2e` from `.symlinks/plugins/e2e/ios`
-> Fetching podspec for `google_ml_vision` from `.symlinks/plugins/google_ml_vision/ios`
-> Fetching podspec for `image_picker` from `.symlinks/plugins/image_picker/ios`
-> Fetching podspec for `path_provider` from `.symlinks/plugins/path_provider/ios`
Resolving dependencies of `Podfile`
CDN: trunk Relative path: CocoaPods-version.yml exists! Returning local because checking is only perfomed in repo update
[!] CocoaPods could not find compatible versions for pod "google_ml_vision":
In Podfile:
google_ml_vision (from `.symlinks/plugins/google_ml_vision/ios`)
Specs satisfying the `google_ml_vision (from `.symlinks/plugins/google_ml_vision/ios`)` dependency were found, but they required a higher minimum deployment target.
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:328:in `raise_error_unless_state'
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:310:in `block in unwind_for_conflict'
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:308:in `tap'
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:308:in `unwind_for_conflict'
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:684:in `attempt_to_activate'
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:254:in `process_topmost_state'
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:182:in `resolve'
/Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolver.rb:43:in `resolve'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/resolver.rb:94:in `resolve'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer/analyzer.rb:1074:in `block in resolve_dependencies'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/user_interface.rb:64:in `section'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer/analyzer.rb:1072:in `resolve_dependencies'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer/analyzer.rb:124:in `analyze'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:414:in `analyze'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:239:in `block in resolve_dependencies'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/user_interface.rb:64:in `section'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:238:in `resolve_dependencies'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:160:in `install!'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/command/install.rb:52:in `run'
/Library/Ruby/Gems/2.6.0/gems/claide-1.0.3/lib/claide/command.rb:334:in `run'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/command.rb:52:in `run'
/Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/bin/pod:55:in `<top (required)>'
/usr/local/bin/pod:23:in `load'
/usr/local/bin/pod:23:in `

'
Error output from CocoaPods:

[!] Automatically assigning platform `iOS` with version `10.0` on target `Runner` because no platform was specified. Please specify a platform for this target in your Podfile. See `https://guides.cocoapods.org/syntax/podfile.html#platform`.
Exception: Error running pod install

Is there anything I missed before I can run it?

Bug in code

After removing height and width the code works perfectly

class GoogleVisionImagePlaneMetadata {
  GoogleVisionImagePlaneMetadata({
    required this.bytesPerRow,
    this.height,
    this.width,
  })  : assert(defaultTargetPlatform != TargetPlatform.iOS),
        assert(defaultTargetPlatform != TargetPlatform.iOS);
