microsoft / cognitive-face-ios

iOS SDK for the Microsoft Face API, part of Cognitive Services

Home Page: https://www.microsoft.com/cognitive-services/en-us/face-api

License: Other

Ruby 0.26% Objective-C 98.65% Objective-C++ 1.10%
microsoft-cognitive-services microsoft sdk cocoapods face-api ios sample cognitive-services face

cognitive-face-ios's Introduction

Microsoft Face API: iOS Client Library & Sample

This repo contains the iOS client library & sample for the Microsoft Face API, an offering within Microsoft Cognitive Services, formerly known as Project Oxford.

The Client Library

The easiest way to consume the iOS client library is via CocoaPods. To install via CocoaPods:

  1. Install CocoaPods - Follow the getting started guide to install CocoaPods.
  2. Add the following to your Podfile: pod 'ProjectOxfordFace'.
  3. Run pod install to install the latest ProjectOxfordFace pod.
  4. Add #import <ProjectOxfordFace/MPOFaceSDK.h> to all files that need to reference the SDK.

The Sample

The sample app demonstrates the use of the Microsoft Face API iOS client library. The sample shows scenarios such as face detection, face verification, and face grouping.

Requirements

iOS must be version 8.1 or higher.

Building and running the sample

The sample app ships with the necessary Pods. Open ProjectOxfordFace.xcworkspace in Xcode and build.

  1. First, obtain a Face API subscription key by following the instructions on our website.
  2. In Xcode, under the example subdirectory, open MPOAppDelegate.h and insert your subscription key for the Face API.
  3. To run the sample app, ensure that the target in the top left of Xcode is set to ProjectOxfordFace-Example, then press the play button or select Product > Run from the menu bar.
  4. Once the app is launched, tap the buttons to try out the different scenarios.

Microsoft will receive the images you upload and may use them to improve Face API and related services. By submitting an image, you confirm you have consent from everyone in it.

Having issues?

  1. Make sure you have selected ProjectOxfordFace-Example as the target.
  2. Make sure you have included the subscription key in MPOTestConstants.h.
  3. Make sure you have opened the .xcworkspace file and not the .xcodeproj file in Xcode.
  4. Make sure you have used the correct Deployment Team profile.
  5. Make sure you are running iOS 8.1 or higher.

Running and exploring the unit tests

Unit tests that demonstrate various Microsoft Cognitive Services scenarios such as detection, identification, grouping, similarity, verification, and face lists are located at Example/Tests.

To run the unit tests, first insert your subscription key in MPOTestConstants.h and then select the test navigator pane in Xcode to display all of the tests which can be run.

Contributing

We welcome contributions. Feel free to file issues and pull requests on the repo and we'll address them as we can. Learn more about how you can help on our Contribution Rules & Guidelines.

You can reach out to us anytime with questions and suggestions through our communities.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Updates

License

All Microsoft Cognitive Services SDKs and samples are licensed with the MIT License. For more details, see LICENSE.

Sample images are licensed separately, please refer to LICENSE-IMAGE.

Developer Code of Conduct

Developers using Cognitive Services, including this client library & sample, are expected to follow the “Developer Code of Conduct for Microsoft Cognitive Services”, found at http://go.microsoft.com/fwlink/?LinkId=698895.

cognitive-face-ios's People

Contributors

delfu, huxuan, lightfrenzy, llkpersonal, msftgits, naterickard, ryanga, thinkerjesse, wangjun-microsoft


cognitive-face-ios's Issues

mostEmotionValue error

e.g.:

    // self.mostEmotionValue = 0, self.happiness = 0.464, self.sadness = 0.014
    if (self.happiness > self.mostEmotionValue)
    {
        self.mostEmotion = @"happiness";
        self.mostEmotionValue = self.happiness;
    }
    if (self.sadness > self.mostEmotionValue)
    {
        self.mostEmotion = @"sadness";
        self.mostEmotionValue = self.sadness;
    }

Actual result:

    self.mostEmotion = @"sadness";
    self.mostEmotionValue = 0.014;
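A comparison cascade like this is easy to get wrong as more emotions are added; taking the maximum over all scores in a single step avoids the whole class of bug. A minimal sketch in Python (the dictionary and helper name are illustrative, not part of the SDK):

```python
def most_emotion(scores):
    """Return (name, value) for the highest-scoring emotion."""
    name = max(scores, key=scores.get)  # argmax over the dict keys
    return name, scores[name]

scores = {"happiness": 0.464, "sadness": 0.014, "anger": 0.002}
print(most_emotion(scores))  # ('happiness', 0.464)
```

The same idea translates to Objective-C by iterating once over an attribute dictionary instead of chaining per-field if statements.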

Get the percentage of the detection completed

Hey, I am using the Face API for iOS for facial verification, via the following method:

- (NSURLSessionDataTask *)detectWithData:(NSData *)data returnFaceId:(BOOL)returnFaceId returnFaceLandmarks:(BOOL)returnFaceLandmarks returnFaceAttributes:(NSArray *)returnFaceAttributes completionBlock:(MPOFaceArrayCompletionBlock)completion;

Is there a way to get the percentage of the verification process that has completed?

Error Code : 415 Invalid Media Type.

Hello,

I am using the Cognitive Face iOS API for face detection. I replaced the subscription key in two files, MPOTestConstants.h and MPOAppDelegate.h. When I run the demo and tap Detection, I get the response below from the web service.

Get Response :

{ URL: https://api.projectoxford.ai/face/v1.0/detect?returnFaceAttributes=age,facialHair,headPose,smile,gender&returnFaceId=true&returnFaceLandmarks=true } { status code: 415, headers {
    "Cache-Control" = "no-cache";
    "Content-Length" = 64;
    "Content-Type" = "application/json; charset=utf-8";
    Date = "Tue, 18 Oct 2016 06:48:59 GMT";
    Expires = "-1";
    Pragma = "no-cache";
    "X-AspNet-Version" = "4.0.30319";
    "X-Powered-By" = "ASP.NET";
    "apim-request-id" = "ccfcf081-a3e7-4ff2-aef6-5c8d4b9c4b08";
} }

Error :

Error Domain=POFaceServiceClient error - http response is not success : {"error":{"code":"BadArgument","message":"Invalid Media Type."}} Code=415 "The operation couldn’t be completed. (POFaceServiceClient error - http response is not success : {"error":{"code":"BadArgument","message":"Invalid Media Type."}} error 415.)"
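A 415 usually means the request body was not a valid image (or the Content-Type header did not match it). One way to rule out the image data itself is to check the buffer's magic bytes before uploading; a rough sketch in Python (a hypothetical helper, not part of the SDK):

```python
def looks_like_image(data: bytes) -> bool:
    """Heuristic: does the buffer start with a JPEG, PNG, GIF, or BMP header?"""
    signatures = (b"\xff\xd8\xff", b"\x89PNG\r\n\x1a\n", b"GIF8", b"BM")
    return any(data.startswith(sig) for sig in signatures)

print(looks_like_image(b"\xff\xd8\xff\xe0" + b"\x00" * 16))  # True: JPEG header
print(looks_like_image(b"not an image"))                      # False
```

In the iOS sample, the equivalent check is making sure the NSData passed to detectWithData: actually came from UIImageJPEGRepresentation or UIImagePNGRepresentation.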

detection failed message.

I get a detection failed message.
Log shows:
ProjectOxfordFace_Example[23964:8663798] [Generic] Creating an image format with an unknown type is an error

Face Detection with error code 0

Hey,

We're using your face detection and face verification APIs from the SDK, and we've paid for the service, but I'm still getting an unknown error from the detection API. I'm attaching a screenshot of the error.

Could you please provide the solution for this error?

Thank you!

Face Detect APIErrorException: (404) Resource not found

I tried Azure Face Detect from Python and hit this issue:

    File "C:\ProgramData\Anaconda3\lib\site-packages\azure\cognitiveservices\vision\face\operations\_face_operations.py", line 549, in detect_with_url
        raise models.APIErrorException(self._deserialize, response)

    APIErrorException: (404) Resource not found

Here is my code:

    import asyncio, io, glob, os, sys, time, uuid, requests
    from urllib.parse import urlparse
    from io import BytesIO
    from PIL import Image, ImageDraw
    from azure.cognitiveservices.vision.face import FaceClient
    from msrest.authentication import CognitiveServicesCredentials
    from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person, SnapshotObjectType, OperationStatusType

    # Set the FACE_SUBSCRIPTION_KEY environment variable with your key as the value.
    # This key will serve all examples in this document.
    KEY = os.environ['FACE_SUBSCRIPTION_KEY']

    # Set the FACE_ENDPOINT environment variable with the endpoint from your Face service in Azure.
    # This endpoint will be used in all examples in this quickstart.
    ENDPOINT = os.environ['FACE_ENDPOINT']

    # Create an authenticated FaceClient.
    face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

    # Detect a face in an image that contains a single face.
    single_face_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/Robert_Downey_Jr_2014_Comic_Con_%28cropped%29.jpg/220px-Robert_Downey_Jr_2014_Comic_Con_%28cropped%29.jpg'
    single_image_name = os.path.basename(single_face_image_url)
    detected_faces = face_client.face.detect_with_url(url=single_face_image_url)
    if not detected_faces:
        raise Exception('No face detected from image {}'.format(single_image_name))

    # Display the detected face ID in the first single-face image.
    # Face IDs are used for comparison to faces (their IDs) detected in other images.
    print('Detected face ID from', single_image_name, ':')
    for face in detected_faces:
        print(face.face_id)
    print()

    # Save this ID for use in Find Similar.
    first_image_face_ID = detected_faces[0].face_id

The error occurs at:

    detected_faces = face_client.face.detect_with_url(url=single_face_image_url)

FACE_ENDPOINT is "https://eastasia.api.cognitive.microsoft.com/face/v1.0/detect/".

Does anyone have ideas about this issue?
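A likely cause of this 404 is passing the full REST path as FACE_ENDPOINT: FaceClient appends the /face/v1.0/detect path itself, so the endpoint should be only the resource root. A small sketch of normalizing the value (the helper name is mine, not part of the azure SDK):

```python
from urllib.parse import urlparse

def face_endpoint_root(endpoint: str) -> str:
    """Strip any path so only the scheme and host remain."""
    parts = urlparse(endpoint)
    return f"{parts.scheme}://{parts.netloc}"

print(face_endpoint_root("https://eastasia.api.cognitive.microsoft.com/face/v1.0/detect/"))
# https://eastasia.api.cognitive.microsoft.com
```

With the endpoint trimmed this way, the SDK composes the correct request URL instead of doubling the path.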

Problem with Detect method in swift 3

Hi,

I'm trying to use the library from Swift 3 and ran into an issue while trying to use detect.

if I try and call the method like this:

client.detect(with: data, returnFaceId: true, returnFaceLandmarks: true, returnFaceAttributes: []) { (faces, error) in //...code... }
everything works fine and I get the faces array correctly.

but when I try and add a face attribute type (instead of empty array as above), like this:

client.detect(with: data, returnFaceId: true, returnFaceLandmarks: true, returnFaceAttributes: [MPOFaceAttributeTypeAge, MPOFaceAttributeTypeHeadPose]) { (faces, error) in //...code... }

the program crashes with the following error:

*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[_SwiftValue isEqualToNumber:]: unrecognized selector sent to instance 0x1744502f0'

The error happens regardless of the number and kind of MPOFaceAttributeType values used.
Any ideas? Perhaps some syntax issue I'm missing?

Dave

ListPersonsWithPersonGroupId not returning persisted Face Ids

It appears that the API has changed since this SDK was last updated. The SDK expects faceIds to be returned in the JSON:

https://github.com/Microsoft/Cognitive-Face-iOS/blob/master/Pod/Classes/MPOPerson.m#L39

Whereas the API docs state that persistedFaceIds will be returned:

https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241

I can confirm via usage that the saved Face Ids are not being returned currently.
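Until the SDK is updated, one client-side workaround is to prefer persistedFaceIds and fall back to the legacy key. The parsing logic, sketched in Python for clarity (the actual fix would go in MPOPerson.m's Objective-C initializer):

```python
def persisted_face_ids(person: dict) -> list:
    """Prefer the current API field, falling back to the pre-rename one."""
    return person.get("persistedFaceIds", person.get("faceIds", []))

print(persisted_face_ids({"personId": "p1", "persistedFaceIds": ["f1", "f2"]}))  # ['f1', 'f2']
```

Reading both keys keeps the client working against either version of the service response.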

How to detect emotion using Microsoft Face API iOS client library?

Hello,

I would like to use Microsoft Cognitive Services in my iOS app. I can do some simple things with it, but I cannot find a way to detect emotion in a photo.

Here is what I can do now:

    let client = MPOFaceServiceClient.init(endpointAndSubscriptionKey: "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/", key: "MY_KEY")

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        guard let image = info[UIImagePickerControllerOriginalImage] as? UIImage else { return }
        var data = Data()
        data = UIImageJPEGRepresentation(image, 0.7)!
        client?.detect(with: data, returnFaceId: true, returnFaceLandmarks: true, returnFaceAttributes: nil) { (result, error) in
            if let error = error {
                print(error.localizedDescription)
            } else {
                guard let result = result else { return }
                for face in result {
                    print(face.faceId)
                    let emotion = MPOFaceEmotion()
                }
            }
        }
    }

The sample project is not that useful, and I have found it very difficult to get any documentation for the iOS client library.

getPersonGroupTrainingStatusWithPersonGroupId/MPOTrainingStatus is out of date with the Face API

API docs for "Get Person Group Training Status"

getPersonGroupTrainingStatusWithPersonGroupId should be looking for these fields, per the above linked docs:

status
createdDateTime
lastActionDateTime
message

Instead, it's attempting to read non-existent result fields:

    if (self) {
        self.personGroupId = dict[@"personGroupId"];
        self.status = dict[@"status"];
        self.startTime = dict[@"startTime"];
        self.endTime = dict[@"endTime"];
    }
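Per the linked docs, the parser should instead read the current field names. Sketched in Python for clarity (the real fix belongs in MPOTrainingStatus's Objective-C initializer; the target key names are mine):

```python
def parse_training_status(response: dict) -> dict:
    """Map the documented Get Person Group Training Status response fields."""
    return {
        "status": response.get("status"),
        "created": response.get("createdDateTime"),
        "last_action": response.get("lastActionDateTime"),
        "message": response.get("message"),
    }
```

Using .get keeps the parser tolerant of message being absent, which the docs allow when training succeeds.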

Coach on swift 4.0 version

Hello team,
I tried this SDK with Swift 4.0, adding #import <ProjectOxfordFace/MPOFaceSDK.h> to the bridging header. After integration, the face detect code crashes in the completion block with Thread 1: EXC_BAD_ACCESS (code=1, address=0x10).
Please provide a Swift version of the SDK, or a solution for using this SDK from Swift.


getting linking Error with Xcode 9.2

    Undefined symbols for architecture x86_64:
      "_OBJC_CLASS_$_PersistedFace", referenced from:
          objc-class-ref in MPOSimilarFaceViewController.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

Example Code Not Working on iOS 11.1.2

When I select an image for processing I get the following error:
ProjectOxfordFace_Example[17589:7783160] [discovery] errors encountered while discovering extensions: Error Domain=PlugInKit Code=13 "query cancelled" UserInfo={NSLocalizedDescription=query cancelled}
When I call detect I get the following error:
ProjectOxfordFace_Example[17620:7786392] [BoringSSL] Function nw_protocol_boringssl_get_output_frames: line 1183 get output frames failed, state 8196

Could someone look into this, please?

Edit: I managed to fix this issue. Thanks!

POFaceServiceClient error

Error Domain=POFaceServiceClient error - http response is not success : { "statusCode": 404, "message": "Resource not found" } Code=404 "(null)"

I have set the correct endpoint and subscription key, but I keep getting the above error.
static NSString *const ProjectOxfordFaceSubscriptionKey = @"#################";
static NSString *const ProjectOxfordFaceEndpoint = @"https://southeastasia.api.cognitive.microsoft.com/face/v1.0";

I also tried Cognitive-Face-Android, it is very easy to use and no error.

IPV6 issue

Hello
The camera feature doesn't work well in an IPv6 environment. Can you help me, please?
Thank you

Swift support

I am building an app using Swift 3.0, and I noticed this SDK supports Objective-C only.
Can you release a new version of the SDK for Swift?

Please let me know,
Thanks

What causes the error with code 404?

  1. Clone the repo.
  2. Insert the subscription key for the Face API in MPOAppDelegate.h (verified, see step 5).
  3. Re-run pod install in Cognitive-Face-iOS/Example.
  4. Run on a device; the error appears while detecting a face in:
     - (void)detectAction:(id)sender {
         [client detectWithData:data returnFaceId:YES ... completionBlock:^() { ... }];
     }
  5. Verified endpoint/key:
     Endpoint: https://westus.api.cognitive.microsoft.com/face/v1.0
     Key: xxxx

Error:

    NSError * domain: @"POFaceServiceClient error - http response is not success : { "statusCode": 404, "message": "Resource not found" }" - code: 404 0x00000001c4251f70

Before the error, there is another log:

    ... ProjectOxfordFace_Example[299:14136] [discovery] errors encountered while discovering extensions: Error Domain=PlugInKit Code=13 "query cancelled" UserInfo={NSLocalizedDescription=query cancelled}

Q: What causes the error with code 404?
Am I missing something (UserInfo?)

POFaceServiceClient error

I built and ran the sample, and changed the subscription key and the endpoint to my credentials, but when I try to use the services, every service returns:

Domain=POFaceServiceClient error - http response is not success : { "statusCode": 404, "message": "Resource not found" }

I have a cognitive account and a free trial, both with different endpoints and keys; I tried both but got the same error.

I have added links to images showing the accounts, keys and endpoints, the debug screenshot, and MPOAppDelegate.h:

https://drive.google.com/open?id=0B0Q7Z65PAH9ocVBmcVlKOERFYmc
https://drive.google.com/open?id=0B0Q7Z65PAH9oanpGeGpWeGM5Rnc
https://drive.google.com/open?id=0B0Q7Z65PAH9oODV5aGx4S0pyUFk
https://drive.google.com/open?id=0B0Q7Z65PAH9oNW1FckFpVGE0dDA

I hope you can help me with these issues. (y)

"Resource not found" message.

Hi guys,

I've been trying to implement the service for a login view. I started the Face API service on my Azure account and already got the subscription keys, but so far I haven't been able to get this to work!
As a response, I get "Resource not found" with statusCode = 404.
I am in Mexico (not in the US); I'm afraid this could be the reason why this is not working. Any clue on this, please?
Thank you in advance...

Getting - {"error":{"code":"404","message": "Resource not found"}}

Hi
I've been trying to implement the translator API, I started the API service on my Azure account and already got the subscription keys, but I haven't been able to get this to work!

As a response, I am getting a Message - "{"error":{"code":"404","message": "Resource not found"}}"
I am using the code below:
    public static class AzureTranslator
    {
        private static readonly string subscriptionKey = "I am using generated key";
        private static readonly string endpoint = "I am using generated endpoint";
        private static readonly string location = "eastus";

        public static async Task FunctionResult()
        {
            string route = "/translate?api-version=3.0&to=de";
            string textToTranslate = "Hello, world!";
            object[] body = new object[] { new { Text = textToTranslate } };
            //var requestBody = JsonConvert.SerializeObject(ds.Tables[0]);
            var requestBody = JsonConvert.SerializeObject(body);

            using (var client = new HttpClient())
            using (var request = new HttpRequestMessage())
            {
                // Build the request.
                request.Method = HttpMethod.Post;
                request.RequestUri = new Uri(endpoint + route);
                request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
                request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
                request.Headers.Add("Ocp-Apim-Subscription-Region", location);

                // Send the request and get the response.
                HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
                // Read the response as a string.
                string result = await response.Content.ReadAsStringAsync();
                //return result;
            }
        }
    }
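One common cause of this 404 with Translator v3 is using the Azure resource's endpoint as the base URL: the /translate route exists only under the global Translator endpoint, api.cognitive.microsofttranslator.com. A quick way to check is to print the composed request URL; sketched in Python (the values mirror the C# snippet above):

```python
# The Translator v3 global endpoint; the route matches the C# snippet above.
TRANSLATOR_ENDPOINT = "https://api.cognitive.microsofttranslator.com"
route = "/translate?api-version=3.0&to=de"
request_url = TRANSLATOR_ENDPOINT + route
print(request_url)
# https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de
```

If the printed URL points at your resource's regional cognitive-services host instead, the service will respond with exactly this "Resource not found" error.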

createLargePersonGroup error with 1.4.0

Hi,

I am using pod version 1.4.0, coding in Swift, and have no issues with:

  • Identify
  • Verify
  • Create Large Person Group
  • Create Person

I am having issues when trying to add an image to the person.

Code:

    self.faceClient.addPersonFace(withLargePersonGroupId: self.faceLargePersonGroup, personId: self.facePersonID, data: imageData, userData: "Image created on:", faceRectangle: nil, completionBlock: { (mMPOAddPersistedFaceResult, error) in

        if (error != nil) {
            print("Error in addPersonFace error is : \(String(describing: error))")
        }
        else {
Error:

43EF-BA8A-08EA75F0E115>.<1> finished with error - code: -1002
Error in addPersonFace error is : Optional(Error Domain=POFaceServiceClient error - http response is not success :  Code=0 "(null)")

The endpoint I am using works flawlessly with the other methods. The large person group ID exists, the person ID exists, and the data is correct (I get the same error if I use a URL instead).

I need some guidance to identify and solve the issue; the error log is not helping.

Thanks.
