
ma-dan / yolov3-coreml


This project forked from hollance/yolo-coreml-mpsnngraph


YOLOv3 for iOS implemented using CoreML.

License: MIT License

Swift 97.68% Python 2.32%
coreml deep-learning yolov3 swift ios

yolov3-coreml's Introduction

YOLOv3 with Core ML

This repo was forked and modified from hollance/YOLO-CoreML-MPSNNGraph. Some changes I made:

  1. Added the YOLOv3 model.
  2. Kept only the Keras converter.

About YOLO object detection

YOLO is an object detection network. It can detect multiple objects in an image and put bounding boxes around them. Read hollance's blog post about YOLO to learn more about how it works.

YOLO in action

In this repo you'll find:

  • YOLOv3-CoreML: A demo app that runs the YOLOv3 neural network on Core ML.
  • Converter: The scripts needed to convert the original Keras YOLOv3 model to Core ML.

To run the app:

  1. Extract the YOLOv3 Core ML model in the YOLOv3 CoreML model folder and copy it to the YOLOv3-CoreML/YOLOv3-CoreML folder.
  2. Open the xcodeproj file in Xcode 9 and run it on a device with iOS 11 or later installed.

The reported "elapsed" time is how long it takes the YOLO neural net to process a single image. The FPS is the actual throughput achieved by the app.

NOTE: Running these kinds of neural networks eats up a lot of battery power. The app can limit how many times per second it runs the neural net. You can change this in setUpCamera() by changing the line videoCapture.fps = 50 to a smaller number.
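For example, a sketch of what that change might look like, assuming the setUpCamera() implementation from this demo (the surrounding capture setup is abbreviated here):

func setUpCamera() {
  videoCapture = VideoCapture()
  videoCapture.delegate = self
  // Limit how many frames per second are fed to the YOLO network.
  // Lower values reduce battery drain; the demo ships with 50.
  videoCapture.fps = 15
  // ... the rest of the capture session setup stays unchanged ...
}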

Converting the models

NOTE: You don't need to convert the models yourself. Everything you need to run the demo apps is included in the Xcode projects already.

The model is converted from a Keras .h5 model. Follow the Quick Start guide of keras-yolo3 to get the YOLOv3 Keras .h5 model, then use coreml.py to convert the .h5 model to a Core ML model.

yolov3-coreml's People

Contributors

hollance, karolkulesza, ma-dan, robertbiehl



yolov3-coreml's Issues

YOLOv3, tiny-YOLOv3 weird bounding boxes on iPhone X

I'm having trouble with recognition of objects on iPhone X (and iPhone 6s, as I also tested) using YOLOv3-CoreML. Without changing anything in the code, I get weird results, as you can see in the picture below:
[image: img_0047]

Using the predict() function, it appears that the iPhone X recognizes objects like the umbrella and others with 100% confidence. Changing confidenceThreshold and iouThreshold has no effect, and neither does changing maxBoundingBoxes to 1-5. Using predictUsingVision does not produce any prediction on the screen at all.

I get the same problem with tiny-YOLOv3 and YOLOv2, with both custom and original Darknet models, on iPhone X.

However, using the same code on an iPhone 6 (not 6s) produces the exact opposite result. It can predict objects successfully (using the predict() function) with my custom model and the original Darknet models:
[image: photo_2018-09-11_16-35-01]

I assumed there is a problem related to the GPU changes Apple made starting with the iPhone 6s, but I haven't found any information about that. Does anybody else have an issue like this? Has anybody tried running the code on an iPhone X? I deliberately tested the code in the repository without any changes so I could provide a demonstration; the problem isn't about YOLOv3 or tiny-YOLOv3 or even YOLOv2 - the result is the same for me with any YOLO version.

You can find videos demonstrating the work on iPhone 6 and iPhone X:

iPhone X: https://imgur.com/a/EZOpr1W

iPhone 6: https://imgur.com/a/qQZtJAd

I would appreciate any help or suggestions you can provide. I have been trying to solve this problem for two weeks without success, and I wonder whether anyone has had positive results running the code on an iPhone X.

EXC_BAD_ACCESS crash when running app with YOLOv3-SPP model trained on custom data.

I trained a YOLOv3-SPP model using custom data with 3 classes. I then converted it to a Keras .h5 file and then to a Core ML model. I ended up with a Core ML model whose Prediction section looks like this:

[image: Screen Shot 2020-01-08 at 6 09 29 PM]

I tried to use the app by adding the model to Xcode and changing the numClasses variable to 3, yet I still get this crash:

[image: Screen Shot 2020-01-08 at 6 13 00 PM]

I am quite new to the Core ML world, so I don't have a good grasp of what info the MLMultiArrays hold and how to access it. What can I do to solve this crash? Is there something wrong with my model, or is it just the app?
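For readers hitting similar crashes: the size of each YOLO output MLMultiArray is tied to the class count, so a mismatch between the converted model and the Swift-side constants is a common source of out-of-bounds reads. Below is a minimal, hedged sanity check sketched in Swift; the names gridSize and boxesPerCell are illustrative and not necessarily the exact ones used in YOLO.swift.

import CoreML

// Hypothetical sanity check: one YOLO output scale should contain
// boxesPerCell * (numClasses + 5) channels per grid cell. With 3 classes
// and 3 anchors per scale that is 3 * (3 + 5) = 24 channels.
func hasExpectedShape(_ features: MLMultiArray,
                      gridSize: Int,
                      numClasses: Int,
                      boxesPerCell: Int = 3) -> Bool {
  let expectedChannels = boxesPerCell * (numClasses + 5)
  let expectedCount = expectedChannels * gridSize * gridSize
  if features.count != expectedCount {
    print("Unexpected output size: got \(features.count), expected \(expectedCount)")
    return false
  }
  return true
}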

ValueError: Cannot create group in read only mode

I successfully trained a custom tiny YOLO model using this repo https://github.com/qqwweee/keras-yolo3 and it works properly during testing. However, when performing the Core ML conversion I received the following error:

[image of the error]

I performed the conversion described in the README without issue, so the problem is probably in my custom tiny YOLO model:

The model is converted from a Keras .h5 model. Follow the Quick Start guide of keras-yolo3 to get the YOLOv3 Keras .h5 model, then use coreml.py to convert the .h5 model to a Core ML model.

failed to extract the mlmodel

Hello,

Thanks for sharing your code. While trying to extract Yolov3.zip.003, the extraction cannot finish and raises an error saying the archive cannot be loaded.

Any idea?

CoreML model cannot be deserialized

I am trying to do something new here. I have the yolov2 model in frozen_pb format from TensorFlow. I successfully converted it to an mlmodel and got it working using this repo.

Now I have another model where I quantized the weights in the frozen_pb model from float32 to 8 bits (the numbers are still stored in float32 format, but there are now only 255 unique float values, i.e. 8-bit levels). This compresses the model somewhat.

I was able to successfully convert the model using the tf-coreml repo. Same as the float32 model.

Now, on adding this model to the Xcode project, it gives this error:
HERE

The model still has the float32 data type (and the same size). Any ideas where it might be going wrong?
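As a toy illustration of the quantization described above (not the tf-coreml implementation), the idea is that the weights stay float32 but each one is snapped to one of 255 representative values:

import Foundation

// Illustrative sketch only: map every weight to one of `levels` evenly
// spaced values between the minimum and maximum weight. The result is
// still an array of Float (float32) values, just with few unique values.
func quantizeToLevels(_ weights: [Float], levels: Int = 255) -> [Float] {
  guard let minW = weights.min(), let maxW = weights.max(), maxW > minW else {
    return weights
  }
  let step = (maxW - minW) / Float(levels - 1)
  return weights.map { w in
    let index = ((w - minW) / step).rounded()  // 0 ... levels-1
    return minW + index * step                 // still a float32 value
  }
}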

nonMaxSuppression on image still has many repeated bounding boxes

Hi, I am trying to deploy a custom YOLOv3 model using a modified version of your repository, with only one image as input. The mlmodel was successfully created with the right input and output parameters, and I tested the Keras .h5 after converting from Darknet weights to check that everything was OK. Then, on the Core ML side, after calling let boundingBoxes = yolo.computeBoundingBoxes(features: [features, features, features]) in ViewController.swift and after nonMaxSuppression in YOLO.swift, I printed the predictions to check whether the object was being recognized (printing maxBoundingBoxes, class index, score, and coordinates):

count:10
=========================
3
0.9994295
(108.99107360839844, 105.73644256591797, 167.4453125, 173.5439910888672)
=========================
3
0.9994295
(271.0824890136719, 60.53330993652344, 66.54877471923828, 39.44181823730469)
=========================
3
0.9994295
(63.08247756958008, 76.53330993652344, 66.54877471923828, 39.44181823730469)
=========================
3
0.9994295
(351.59149169921875, 10.979837417602539, 17.173877716064453, 26.294544219970703)
=========================
3
0.9994295
(247.59149169921875, 18.97983741760254, 17.173877716064453, 26.294544219970703)
=========================
3
0.9994295
(143.59149169921875, 26.97983741760254, 17.173877716064453, 26.294544219970703)
=========================
3
0.9994295
(39.59149169921875, 34.979835510253906, 17.173877716064453, 26.294544219970703)
=========================
37
0.7621713
(175.49517822265625, 288.18511962890625, 97.42070007324219, 0.0010935374302789569)
=========================
37
0.7621713
(-32.50482177734375, 304.18511962890625, 97.42070007324219, 0.0010935374302789569)
=========================
37
0.7621713
(307.5323486328125, 128.09246826171875, 25.140825271606445, 0.000729024934116751)

There was only 1 item in the picture (tomato is class index number 3)
[image]

I don't think it should get that many results from only one frame, so maybe something went wrong when I was modifying ViewController. This is my modified ViewController:

import UIKit
import Vision
import AVFoundation
import CoreMedia
import VideoToolbox

class ViewController: UIViewController {
  @IBOutlet weak var videoPreview: UIView!
  @IBOutlet weak var timeLabel: UILabel!
  @IBOutlet weak var debugImageView: UIImageView!

  let yolo = YOLO()

  var videoCapture: VideoCapture!
  var request: VNCoreMLRequest!

  var boundingBoxes = [BoundingBox]()
  var colors: [UIColor] = []

  let ciContext = CIContext()
  var resizedPixelBuffer: CVPixelBuffer?

  var framesDone = 0
  var frameCapturingStartTime = CACurrentMediaTime()
  let semaphore = DispatchSemaphore(value: 2)

  override func viewDidLoad() {
    super.viewDidLoad()

    setUpBoundingBoxes()
    setUpCoreImage()
    //setUpVision()


    startObjectDetection();
    // NOTE: If you choose another crop/scale option, then you must also
    // change how the BoundingBox objects get scaled when they are drawn.
    // Currently they assume the full input image is used.
    request.imageCropAndScaleOption = .scaleFill
    //setUpCamera()

    //frameCapturingStartTime = CACurrentMediaTime()
  }

  override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    print(#function)
  }

  // MARK: - Initialization
  func setUpBoundingBoxes() {
    for _ in 0..<YOLO.maxBoundingBoxes {
      boundingBoxes.append(BoundingBox())
    }

    // Make colors for the bounding boxes. There is one color for each class,
    for r: CGFloat in [0.2, 0.4, 0.6, 0.8, 1.0] {
      for g: CGFloat in [0.3, 0.7, 0.6, 0.8] {
        for b: CGFloat in [0.4, 0.8, 0.6, 1.0] {
          let color = UIColor(red: r, green: g, blue: b, alpha: 1)
          colors.append(color)
        }
      }
    }
  }

func startObjectDetection(tgtImg: UIImage){
        guard let model = try? VNCoreMLModel(for:Yolov3().model) else {
            print("failed to load model")
            return
        }
        let handler = VNImageRequestHandler(cgImage: tgtImg.cgImage!, options: [:])
        let request = createRequest(model: model)
        try? handler.perform([request])
    }


    func createRequest(model: VNCoreMLModel) -> VNCoreMLRequest {
            return VNCoreMLRequest(model: model, completionHandler: { (request, error) in
            DispatchQueue.main.async(execute: {
 
          if let observations = request.results as? [VNCoreMLFeatureValueObservation],
             let features = observations.first?.featureValue.multiArrayValue {

            let boundingBoxes = yolo.computeBoundingBoxes(features: [features, features, features])
            //let elapsed = CACurrentMediaTime() - startTimes.remove(at: 0)

            self.classIndexpredictions(predictions: boundingBoxes)
            //self.show(predictions: boundingBoxes)
          }
          })
        })
    }


  override var preferredStatusBarStyle: UIStatusBarStyle {
    return .lightContent
  }

  func resizePreviewLayer() {
    videoCapture.previewLayer?.frame = videoPreview.bounds
  }

  // MARK: - Doing inference
  func predict(image: UIImage) {
    if let pixelBuffer = image.pixelBuffer(width: YOLO.inputWidth, height: YOLO.inputHeight) {
      predict(pixelBuffer: pixelBuffer)
    }
  }

  func predict(pixelBuffer: CVPixelBuffer) {
    // Measure how long it takes to predict a single video frame.
    //let startTime = CACurrentMediaTime()

    // Resize the input with Core Image to 416x416.
    guard let resizedPixelBuffer = resizedPixelBuffer else { return }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let sx = CGFloat(YOLO.inputWidth) / CGFloat(CVPixelBufferGetWidth(pixelBuffer))
    let sy = CGFloat(YOLO.inputHeight) / CGFloat(CVPixelBufferGetHeight(pixelBuffer))
    let scaleTransform = CGAffineTransform(scaleX: sx, y: sy)
    let scaledImage = ciImage.transformed(by: scaleTransform)
    ciContext.render(scaledImage, to: resizedPixelBuffer)

    if let boundingBoxes = try? yolo.predict(image: resizedPixelBuffer) {
      //let elapsed = CACurrentMediaTime() - startTime
      self.show(predictions: boundingBoxes)
    }
  }


  func classIndexpredictions(predictions: [YOLO.Prediction]) {
    for i in 0..<boundingBoxes.count {
      if i < predictions.count {
        let prediction = predictions[i]

        // The predicted bounding box is in the coordinate space of the input
        // image, which is a square image of 416x416 pixels. We want to show it
        // on the video preview, which is as wide as the screen and has a 4:3
        // aspect ratio. The video preview also may be letterboxed at the top
        // and bottom.
        let width = view.bounds.width
        let height = width * 4 / 3
        let scaleX = width / CGFloat(YOLO.inputWidth)
        let scaleY = height / CGFloat(YOLO.inputHeight)
        let top = (view.bounds.height - height) / 2

        // Translate and scale the rectangle to our own coordinate system.
        var rect = prediction.rect
        rect.origin.x *= scaleX
        rect.origin.y *= scaleY
        rect.origin.y += top
        rect.size.width *= scaleX
        rect.size.height *= scaleY

        // Show the bounding box.
        let label = String(format: "%@ %.1f", labels[prediction.classIndex], prediction.score * 100)
        let color = colors[prediction.classIndex]
        boundingBoxes[i].show(frame: rect, label: label, color: color)
      } else {
        boundingBoxes[i].hide()
      }
    }
        print("predictions ok")
  }
}

I also modified the filters in YOLO.swift because I only have 38 classes:

    assert(features[0].count == 129*13*13)
    assert(features[1].count == 129*26*26)
    assert(features[2].count == 129*52*52)

and in Helpers.swift I modified my labels, but that should not affect the outcome. I am not using UIImage+CVPixelBuffer.swift, VideoCapture.swift, and CVPixelBuffer+Helpers.swift because I just receive one photo the user took, so I am really only using ViewController, YOLO, BoundingBox, and Helpers. This is also the first time I've dealt with Swift code, and I don't own a Mac, so I cannot compile it myself and cannot validate whether the code is OK in case something else is wrong.

Is it possible to find out why there are still so many bounding boxes?

Here is my model config file, yolo3cfg.txt, just in case.

UPDATE: I started over using only the repository and found out that in ViewController:255 we are calling the normal predict(), but if we want to use a single image, we need predictUsingVision(), where the VNImageRequestHandler is. If I use that function, I get the result above, but if I use the normal predict(), it works just fine with my model. Why is VNImageRequestHandler giving such a different result?
Thank you.
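For context on the repeated-boxes question above: non-max suppression only discards a box when a higher-scoring, already-kept box of the same class overlaps it by more than the IoU threshold, so identically scored boxes that barely overlap all survive it. Below is a minimal, hedged sketch of that kind of class-wise filtering in Swift; the names are illustrative and not necessarily the exact ones used in YOLO.swift.

import CoreGraphics

struct SimplePrediction {
  let classIndex: Int
  let score: Float
  let rect: CGRect
}

// Intersection-over-union of two boxes; 0 if they do not overlap.
func intersectionOverUnion(_ a: CGRect, _ b: CGRect) -> Float {
  let inter = a.intersection(b)
  if inter.isNull || inter.isEmpty { return 0 }
  let interArea = inter.width * inter.height
  let unionArea = a.width * a.height + b.width * b.height - interArea
  return unionArea > 0 ? Float(interArea / unionArea) : 0
}

// Keep at most maxBoxes predictions, dropping any box that overlaps an
// already-kept box of the same class by more than iouThreshold.
func nonMaxSuppression(_ predictions: [SimplePrediction],
                       iouThreshold: Float,
                       maxBoxes: Int) -> [SimplePrediction] {
  var kept: [SimplePrediction] = []
  for p in predictions.sorted(by: { $0.score > $1.score }) {
    if kept.count >= maxBoxes { break }
    let overlaps = kept.contains {
      $0.classIndex == p.classIndex &&
      intersectionOverUnion($0.rect, p.rect) > iouThreshold
    }
    if !overlaps { kept.append(p) }
  }
  return kept
}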

Getting error while trying the conversion

While trying to convert the model, I keep getting the following error:
[image of the error]

I am not sure what is wrong, because the Keras model looks fine (I was able to start training it).

Converting to h5 is fine, but further conversion to a Core ML model fails

By following the description of keras-yolo3, I converted yolov3.weights to the Keras model yolo.h5. However, the conversion from yolo.h5 to a Core ML model failed. I pasted the detailed command-line output below. The error message and my environment details can be found at the bottom of this post. Sorry for the long command-line output.

  1. convert yolov3.weights to keras model yolo.h5
$ python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
Using TensorFlow backend.
Loading weights.
Weights Header:  0 2 0 [32013312]
Parsing Darknet config.
Creating Keras model.
Parsing section net_0
Parsing section convolutional_0
conv2d bn leaky (3, 3, 3, 32)
2018-06-18 10:02:53.364525: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Parsing section convolutional_1
conv2d bn leaky (3, 3, 32, 64)
Parsing section convolutional_2
conv2d bn leaky (1, 1, 64, 32)
Parsing section convolutional_3
conv2d bn leaky (3, 3, 32, 64)
Parsing section shortcut_0
Parsing section convolutional_4
conv2d bn leaky (3, 3, 64, 128)
Parsing section convolutional_5
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_6
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_1
Parsing section convolutional_7
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_8
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_2
Parsing section convolutional_9
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_10
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_11
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_3
Parsing section convolutional_12
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_13
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_4
Parsing section convolutional_14
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_15
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_5
Parsing section convolutional_16
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_17
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_6
Parsing section convolutional_18
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_19
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_7
Parsing section convolutional_20
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_21
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_8
Parsing section convolutional_22
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_23
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_9
Parsing section convolutional_24
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_25
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_10
Parsing section convolutional_26
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_27
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_28
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_11
Parsing section convolutional_29
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_30
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_12
Parsing section convolutional_31
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_32
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_13
Parsing section convolutional_33
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_34
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_14
Parsing section convolutional_35
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_36
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_15
Parsing section convolutional_37
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_38
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_16
Parsing section convolutional_39
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_40
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_17
Parsing section convolutional_41
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_42
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_18
Parsing section convolutional_43
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_44
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_45
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_19
Parsing section convolutional_46
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_47
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_20
Parsing section convolutional_48
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_49
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_21
Parsing section convolutional_50
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_51
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_22
Parsing section convolutional_52
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_53
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_54
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_55
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_56
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_57
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_58
conv2d    linear (1, 1, 1024, 255)
Parsing section yolo_0
Parsing section route_0
Parsing section convolutional_59
conv2d bn leaky (1, 1, 512, 256)
Parsing section upsample_0
Parsing section route_1
Concatenating route layers: [<tf.Tensor 'up_sampling2d_1/ResizeNearestNeighbor:0' shape=(?, ?, ?, 256) dtype=float32>, <tf.Tensor 'add_19/add:0' shape=(?, ?, ?, 512) dtype=float32>]
Parsing section convolutional_60
conv2d bn leaky (1, 1, 768, 256)
Parsing section convolutional_61
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_62
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_63
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_64
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_65
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_66
conv2d    linear (1, 1, 512, 255)
Parsing section yolo_1
Parsing section route_2
Parsing section convolutional_67
conv2d bn leaky (1, 1, 256, 128)
Parsing section upsample_1
Parsing section route_3
Concatenating route layers: [<tf.Tensor 'up_sampling2d_2/ResizeNearestNeighbor:0' shape=(?, ?, ?, 128) dtype=float32>, <tf.Tensor 'add_11/add:0' shape=(?, ?, ?, 256) dtype=float32>]
Parsing section convolutional_68
conv2d bn leaky (1, 1, 384, 128)
Parsing section convolutional_69
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_70
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_71
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_72
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_73
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_74
conv2d    linear (1, 1, 256, 255)
Parsing section yolo_2
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, None, None, 3 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, None, None, 3 864         input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, None, 3 128         conv2d_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU)       (None, None, None, 3 0           batch_normalization_1[0][0]
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, None, None, 3 0           leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, None, None, 6 18432       zero_padding2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, None, 6 256         conv2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU)       (None, None, None, 6 0           batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, None, None, 3 2048        leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, None, 3 128         conv2d_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU)       (None, None, None, 3 0           batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, None, None, 6 18432       leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, None, None, 6 256         conv2d_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU)       (None, None, None, 6 0           batch_normalization_4[0][0]
__________________________________________________________________________________________________
add_1 (Add)                     (None, None, None, 6 0           leaky_re_lu_2[0][0]
                                                                 leaky_re_lu_4[0][0]
__________________________________________________________________________________________________
zero_padding2d_2 (ZeroPadding2D (None, None, None, 6 0           add_1[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, None, None, 1 73728       zero_padding2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, None, None, 1 512         conv2d_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU)       (None, None, None, 1 0           batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, None, None, 6 8192        leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, None, None, 6 256         conv2d_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU)       (None, None, None, 6 0           batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, None, None, 1 73728       leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, None, None, 1 512         conv2d_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU)       (None, None, None, 1 0           batch_normalization_7[0][0]
__________________________________________________________________________________________________
add_2 (Add)                     (None, None, None, 1 0           leaky_re_lu_5[0][0]
                                                                 leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, None, None, 6 8192        add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, None, None, 6 256         conv2d_8[0][0]
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU)       (None, None, None, 6 0           batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, None, None, 1 73728       leaky_re_lu_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, None, None, 1 512         conv2d_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU)       (None, None, None, 1 0           batch_normalization_9[0][0]
__________________________________________________________________________________________________
add_3 (Add)                     (None, None, None, 1 0           add_2[0][0]
                                                                 leaky_re_lu_9[0][0]
__________________________________________________________________________________________________
zero_padding2d_3 (ZeroPadding2D (None, None, None, 1 0           add_3[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, None, None, 2 294912      zero_padding2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, None, None, 2 1024        conv2d_10[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, None, None, 1 32768       leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, None, None, 1 512         conv2d_11[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, None, None, 2 1024        conv2d_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_12[0][0]
__________________________________________________________________________________________________
add_4 (Add)                     (None, None, None, 2 0           leaky_re_lu_10[0][0]
                                                                 leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, None, None, 1 32768       add_4[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, None, None, 1 512         conv2d_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, None, None, 2 1024        conv2d_14[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_14[0][0]
__________________________________________________________________________________________________
add_5 (Add)                     (None, None, None, 2 0           add_4[0][0]
                                                                 leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, None, None, 1 32768       add_5[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, None, None, 1 512         conv2d_15[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, None, None, 2 1024        conv2d_16[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_16[0][0]
__________________________________________________________________________________________________
add_6 (Add)                     (None, None, None, 2 0           add_5[0][0]
                                                                 leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, None, None, 1 32768       add_6[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, None, None, 1 512         conv2d_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, None, None, 2 1024        conv2d_18[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_18[0][0]
__________________________________________________________________________________________________
add_7 (Add)                     (None, None, None, 2 0           add_6[0][0]
                                                                 leaky_re_lu_18[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, None, None, 1 32768       add_7[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, None, None, 1 512         conv2d_19[0][0]
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_19[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, None, None, 2 1024        conv2d_20[0][0]
__________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_20[0][0]
__________________________________________________________________________________________________
add_8 (Add)                     (None, None, None, 2 0           add_7[0][0]
                                                                 leaky_re_lu_20[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, None, None, 1 32768       add_8[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, None, None, 1 512         conv2d_21[0][0]
__________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_21[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_21[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, None, None, 2 1024        conv2d_22[0][0]
__________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_22[0][0]
__________________________________________________________________________________________________
add_9 (Add)                     (None, None, None, 2 0           add_8[0][0]
                                                                 leaky_re_lu_22[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, None, None, 1 32768       add_9[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, None, None, 1 512         conv2d_23[0][0]
__________________________________________________________________________________________________
leaky_re_lu_23 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_23[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_23[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, None, None, 2 1024        conv2d_24[0][0]
__________________________________________________________________________________________________
leaky_re_lu_24 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_24[0][0]
__________________________________________________________________________________________________
add_10 (Add)                    (None, None, None, 2 0           add_9[0][0]
                                                                 leaky_re_lu_24[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (None, None, None, 1 32768       add_10[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, None, None, 1 512         conv2d_25[0][0]
__________________________________________________________________________________________________
leaky_re_lu_25 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_25[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_25[0][0]
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, None, None, 2 1024        conv2d_26[0][0]
__________________________________________________________________________________________________
leaky_re_lu_26 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_26[0][0]
__________________________________________________________________________________________________
add_11 (Add)                    (None, None, None, 2 0           add_10[0][0]
                                                                 leaky_re_lu_26[0][0]
__________________________________________________________________________________________________
zero_padding2d_4 (ZeroPadding2D (None, None, None, 2 0           add_11[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, None, None, 5 1179648     zero_padding2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, None, None, 5 2048        conv2d_27[0][0]
__________________________________________________________________________________________________
leaky_re_lu_27 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_27[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (None, None, None, 2 131072      leaky_re_lu_27[0][0]
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, None, None, 2 1024        conv2d_28[0][0]
__________________________________________________________________________________________________
leaky_re_lu_28 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_28[0][0]
__________________________________________________________________________________________________
conv2d_29 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_28[0][0]
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, None, None, 5 2048        conv2d_29[0][0]
__________________________________________________________________________________________________
leaky_re_lu_29 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_29[0][0]
__________________________________________________________________________________________________
add_12 (Add)                    (None, None, None, 5 0           leaky_re_lu_27[0][0]
                                                                 leaky_re_lu_29[0][0]
__________________________________________________________________________________________________
conv2d_30 (Conv2D)              (None, None, None, 2 131072      add_12[0][0]
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, None, None, 2 1024        conv2d_30[0][0]
__________________________________________________________________________________________________
leaky_re_lu_30 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_30[0][0]
__________________________________________________________________________________________________
conv2d_31 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_30[0][0]
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, None, None, 5 2048        conv2d_31[0][0]
__________________________________________________________________________________________________
leaky_re_lu_31 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_31[0][0]
__________________________________________________________________________________________________
add_13 (Add)                    (None, None, None, 5 0           add_12[0][0]
                                                                 leaky_re_lu_31[0][0]
__________________________________________________________________________________________________
conv2d_32 (Conv2D)              (None, None, None, 2 131072      add_13[0][0]
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, None, None, 2 1024        conv2d_32[0][0]
__________________________________________________________________________________________________
leaky_re_lu_32 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_32[0][0]
__________________________________________________________________________________________________
conv2d_33 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_32[0][0]
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, None, None, 5 2048        conv2d_33[0][0]
__________________________________________________________________________________________________
leaky_re_lu_33 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_33[0][0]
__________________________________________________________________________________________________
add_14 (Add)                    (None, None, None, 5 0           add_13[0][0]
                                                                 leaky_re_lu_33[0][0]
__________________________________________________________________________________________________
conv2d_34 (Conv2D)              (None, None, None, 2 131072      add_14[0][0]
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, None, None, 2 1024        conv2d_34[0][0]
__________________________________________________________________________________________________
leaky_re_lu_34 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_34[0][0]
__________________________________________________________________________________________________
conv2d_35 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_34[0][0]
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, None, None, 5 2048        conv2d_35[0][0]
__________________________________________________________________________________________________
leaky_re_lu_35 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_35[0][0]
__________________________________________________________________________________________________
add_15 (Add)                    (None, None, None, 5 0           add_14[0][0]
                                                                 leaky_re_lu_35[0][0]
__________________________________________________________________________________________________
conv2d_36 (Conv2D)              (None, None, None, 2 131072      add_15[0][0]
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, None, None, 2 1024        conv2d_36[0][0]
__________________________________________________________________________________________________
leaky_re_lu_36 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_36[0][0]
__________________________________________________________________________________________________
conv2d_37 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_36[0][0]
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, None, None, 5 2048        conv2d_37[0][0]
__________________________________________________________________________________________________
leaky_re_lu_37 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_37[0][0]
__________________________________________________________________________________________________
add_16 (Add)                    (None, None, None, 5 0           add_15[0][0]
                                                                 leaky_re_lu_37[0][0]
__________________________________________________________________________________________________
conv2d_38 (Conv2D)              (None, None, None, 2 131072      add_16[0][0]
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, None, None, 2 1024        conv2d_38[0][0]
__________________________________________________________________________________________________
leaky_re_lu_38 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_38[0][0]
__________________________________________________________________________________________________
conv2d_39 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_38[0][0]
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, None, None, 5 2048        conv2d_39[0][0]
__________________________________________________________________________________________________
leaky_re_lu_39 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_39[0][0]
__________________________________________________________________________________________________
add_17 (Add)                    (None, None, None, 5 0           add_16[0][0]
                                                                 leaky_re_lu_39[0][0]
__________________________________________________________________________________________________
conv2d_40 (Conv2D)              (None, None, None, 2 131072      add_17[0][0]
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, None, None, 2 1024        conv2d_40[0][0]
__________________________________________________________________________________________________
leaky_re_lu_40 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_40[0][0]
__________________________________________________________________________________________________
conv2d_41 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_40[0][0]
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, None, None, 5 2048        conv2d_41[0][0]
__________________________________________________________________________________________________
leaky_re_lu_41 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_41[0][0]
__________________________________________________________________________________________________
add_18 (Add)                    (None, None, None, 5 0           add_17[0][0]
                                                                 leaky_re_lu_41[0][0]
__________________________________________________________________________________________________
conv2d_42 (Conv2D)              (None, None, None, 2 131072      add_18[0][0]
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, None, None, 2 1024        conv2d_42[0][0]
__________________________________________________________________________________________________
leaky_re_lu_42 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_42[0][0]
__________________________________________________________________________________________________
conv2d_43 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_42[0][0]
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, None, None, 5 2048        conv2d_43[0][0]
__________________________________________________________________________________________________
leaky_re_lu_43 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_43[0][0]
__________________________________________________________________________________________________
add_19 (Add)                    (None, None, None, 5 0           add_18[0][0]
                                                                 leaky_re_lu_43[0][0]
__________________________________________________________________________________________________
zero_padding2d_5 (ZeroPadding2D (None, None, None, 5 0           add_19[0][0]
__________________________________________________________________________________________________
conv2d_44 (Conv2D)              (None, None, None, 1 4718592     zero_padding2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, None, None, 1 4096        conv2d_44[0][0]
__________________________________________________________________________________________________
leaky_re_lu_44 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_44[0][0]
__________________________________________________________________________________________________
conv2d_45 (Conv2D)              (None, None, None, 5 524288      leaky_re_lu_44[0][0]
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, None, None, 5 2048        conv2d_45[0][0]
__________________________________________________________________________________________________
leaky_re_lu_45 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_45[0][0]
__________________________________________________________________________________________________
conv2d_46 (Conv2D)              (None, None, None, 1 4718592     leaky_re_lu_45[0][0]
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, None, None, 1 4096        conv2d_46[0][0]
__________________________________________________________________________________________________
leaky_re_lu_46 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_46[0][0]
__________________________________________________________________________________________________
add_20 (Add)                    (None, None, None, 1 0           leaky_re_lu_44[0][0]
                                                                 leaky_re_lu_46[0][0]
__________________________________________________________________________________________________
conv2d_47 (Conv2D)              (None, None, None, 5 524288      add_20[0][0]
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, None, None, 5 2048        conv2d_47[0][0]
__________________________________________________________________________________________________
leaky_re_lu_47 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_47[0][0]
__________________________________________________________________________________________________
conv2d_48 (Conv2D)              (None, None, None, 1 4718592     leaky_re_lu_47[0][0]
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, None, None, 1 4096        conv2d_48[0][0]
__________________________________________________________________________________________________
leaky_re_lu_48 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_48[0][0]
__________________________________________________________________________________________________
add_21 (Add)                    (None, None, None, 1 0           add_20[0][0]
                                                                 leaky_re_lu_48[0][0]
__________________________________________________________________________________________________
conv2d_49 (Conv2D)              (None, None, None, 5 524288      add_21[0][0]
__________________________________________________________________________________________________
batch_normalization_49 (BatchNo (None, None, None, 5 2048        conv2d_49[0][0]
__________________________________________________________________________________________________
leaky_re_lu_49 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_49[0][0]
__________________________________________________________________________________________________
conv2d_50 (Conv2D)              (None, None, None, 1 4718592     leaky_re_lu_49[0][0]
__________________________________________________________________________________________________
batch_normalization_50 (BatchNo (None, None, None, 1 4096        conv2d_50[0][0]
__________________________________________________________________________________________________
leaky_re_lu_50 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_50[0][0]
__________________________________________________________________________________________________
add_22 (Add)                    (None, None, None, 1 0           add_21[0][0]
                                                                 leaky_re_lu_50[0][0]
__________________________________________________________________________________________________
conv2d_51 (Conv2D)              (None, None, None, 5 524288      add_22[0][0]
__________________________________________________________________________________________________
batch_normalization_51 (BatchNo (None, None, None, 5 2048        conv2d_51[0][0]
__________________________________________________________________________________________________
leaky_re_lu_51 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_51[0][0]
__________________________________________________________________________________________________
conv2d_52 (Conv2D)              (None, None, None, 1 4718592     leaky_re_lu_51[0][0]
__________________________________________________________________________________________________
batch_normalization_52 (BatchNo (None, None, None, 1 4096        conv2d_52[0][0]
__________________________________________________________________________________________________
leaky_re_lu_52 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_52[0][0]
__________________________________________________________________________________________________
add_23 (Add)                    (None, None, None, 1 0           add_22[0][0]
                                                                 leaky_re_lu_52[0][0]
__________________________________________________________________________________________________
conv2d_53 (Conv2D)              (None, None, None, 5 524288      add_23[0][0]
__________________________________________________________________________________________________
batch_normalization_53 (BatchNo (None, None, None, 5 2048        conv2d_53[0][0]
__________________________________________________________________________________________________
leaky_re_lu_53 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_53[0][0]
__________________________________________________________________________________________________
conv2d_54 (Conv2D)              (None, None, None, 1 4718592     leaky_re_lu_53[0][0]
__________________________________________________________________________________________________
batch_normalization_54 (BatchNo (None, None, None, 1 4096        conv2d_54[0][0]
__________________________________________________________________________________________________
leaky_re_lu_54 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_54[0][0]
__________________________________________________________________________________________________
conv2d_55 (Conv2D)              (None, None, None, 5 524288      leaky_re_lu_54[0][0]
__________________________________________________________________________________________________
batch_normalization_55 (BatchNo (None, None, None, 5 2048        conv2d_55[0][0]
__________________________________________________________________________________________________
leaky_re_lu_55 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_55[0][0]
__________________________________________________________________________________________________
conv2d_56 (Conv2D)              (None, None, None, 1 4718592     leaky_re_lu_55[0][0]
__________________________________________________________________________________________________
batch_normalization_56 (BatchNo (None, None, None, 1 4096        conv2d_56[0][0]
__________________________________________________________________________________________________
leaky_re_lu_56 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_56[0][0]
__________________________________________________________________________________________________
conv2d_57 (Conv2D)              (None, None, None, 5 524288      leaky_re_lu_56[0][0]
__________________________________________________________________________________________________
batch_normalization_57 (BatchNo (None, None, None, 5 2048        conv2d_57[0][0]
__________________________________________________________________________________________________
leaky_re_lu_57 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_57[0][0]
__________________________________________________________________________________________________
conv2d_60 (Conv2D)              (None, None, None, 2 131072      leaky_re_lu_57[0][0]
__________________________________________________________________________________________________
batch_normalization_59 (BatchNo (None, None, None, 2 1024        conv2d_60[0][0]
__________________________________________________________________________________________________
leaky_re_lu_59 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_59[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D)  (None, None, None, 2 0           leaky_re_lu_59[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, None, None, 7 0           up_sampling2d_1[0][0]
                                                                 add_19[0][0]
__________________________________________________________________________________________________
conv2d_61 (Conv2D)              (None, None, None, 2 196608      concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_60 (BatchNo (None, None, None, 2 1024        conv2d_61[0][0]
__________________________________________________________________________________________________
leaky_re_lu_60 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_60[0][0]
__________________________________________________________________________________________________
conv2d_62 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_60[0][0]
__________________________________________________________________________________________________
batch_normalization_61 (BatchNo (None, None, None, 5 2048        conv2d_62[0][0]
__________________________________________________________________________________________________
leaky_re_lu_61 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_61[0][0]
__________________________________________________________________________________________________
conv2d_63 (Conv2D)              (None, None, None, 2 131072      leaky_re_lu_61[0][0]
__________________________________________________________________________________________________
batch_normalization_62 (BatchNo (None, None, None, 2 1024        conv2d_63[0][0]
__________________________________________________________________________________________________
leaky_re_lu_62 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_62[0][0]
__________________________________________________________________________________________________
conv2d_64 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_62[0][0]
__________________________________________________________________________________________________
batch_normalization_63 (BatchNo (None, None, None, 5 2048        conv2d_64[0][0]
__________________________________________________________________________________________________
leaky_re_lu_63 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_63[0][0]
__________________________________________________________________________________________________
conv2d_65 (Conv2D)              (None, None, None, 2 131072      leaky_re_lu_63[0][0]
__________________________________________________________________________________________________
batch_normalization_64 (BatchNo (None, None, None, 2 1024        conv2d_65[0][0]
__________________________________________________________________________________________________
leaky_re_lu_64 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_64[0][0]
__________________________________________________________________________________________________
conv2d_68 (Conv2D)              (None, None, None, 1 32768       leaky_re_lu_64[0][0]
__________________________________________________________________________________________________
batch_normalization_66 (BatchNo (None, None, None, 1 512         conv2d_68[0][0]
__________________________________________________________________________________________________
leaky_re_lu_66 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_66[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D)  (None, None, None, 1 0           leaky_re_lu_66[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, None, None, 3 0           up_sampling2d_2[0][0]
                                                                 add_11[0][0]
__________________________________________________________________________________________________
conv2d_69 (Conv2D)              (None, None, None, 1 49152       concatenate_2[0][0]
__________________________________________________________________________________________________
batch_normalization_67 (BatchNo (None, None, None, 1 512         conv2d_69[0][0]
__________________________________________________________________________________________________
leaky_re_lu_67 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_67[0][0]
__________________________________________________________________________________________________
conv2d_70 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_67[0][0]
__________________________________________________________________________________________________
batch_normalization_68 (BatchNo (None, None, None, 2 1024        conv2d_70[0][0]
__________________________________________________________________________________________________
leaky_re_lu_68 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_68[0][0]
__________________________________________________________________________________________________
conv2d_71 (Conv2D)              (None, None, None, 1 32768       leaky_re_lu_68[0][0]
__________________________________________________________________________________________________
batch_normalization_69 (BatchNo (None, None, None, 1 512         conv2d_71[0][0]
__________________________________________________________________________________________________
leaky_re_lu_69 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_69[0][0]
__________________________________________________________________________________________________
conv2d_72 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_69[0][0]
__________________________________________________________________________________________________
batch_normalization_70 (BatchNo (None, None, None, 2 1024        conv2d_72[0][0]
__________________________________________________________________________________________________
leaky_re_lu_70 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_70[0][0]
__________________________________________________________________________________________________
conv2d_73 (Conv2D)              (None, None, None, 1 32768       leaky_re_lu_70[0][0]
__________________________________________________________________________________________________
batch_normalization_71 (BatchNo (None, None, None, 1 512         conv2d_73[0][0]
__________________________________________________________________________________________________
leaky_re_lu_71 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_71[0][0]
__________________________________________________________________________________________________
conv2d_58 (Conv2D)              (None, None, None, 1 4718592     leaky_re_lu_57[0][0]
__________________________________________________________________________________________________
conv2d_66 (Conv2D)              (None, None, None, 5 1179648     leaky_re_lu_64[0][0]
__________________________________________________________________________________________________
conv2d_74 (Conv2D)              (None, None, None, 2 294912      leaky_re_lu_71[0][0]
__________________________________________________________________________________________________
batch_normalization_58 (BatchNo (None, None, None, 1 4096        conv2d_58[0][0]
__________________________________________________________________________________________________
batch_normalization_65 (BatchNo (None, None, None, 5 2048        conv2d_66[0][0]
__________________________________________________________________________________________________
batch_normalization_72 (BatchNo (None, None, None, 2 1024        conv2d_74[0][0]
__________________________________________________________________________________________________
leaky_re_lu_58 (LeakyReLU)      (None, None, None, 1 0           batch_normalization_58[0][0]
__________________________________________________________________________________________________
leaky_re_lu_65 (LeakyReLU)      (None, None, None, 5 0           batch_normalization_65[0][0]
__________________________________________________________________________________________________
leaky_re_lu_72 (LeakyReLU)      (None, None, None, 2 0           batch_normalization_72[0][0]
__________________________________________________________________________________________________
conv2d_59 (Conv2D)              (None, None, None, 2 261375      leaky_re_lu_58[0][0]
__________________________________________________________________________________________________
conv2d_67 (Conv2D)              (None, None, None, 2 130815      leaky_re_lu_65[0][0]
__________________________________________________________________________________________________
conv2d_75 (Conv2D)              (None, None, None, 2 65535       leaky_re_lu_72[0][0]
==================================================================================================
Total params: 62,001,757
Trainable params: 61,949,149
Non-trainable params: 52,608
__________________________________________________________________________________________________
None
Saved Keras model to model_data/yolo.h5
Read 62001757 of 62001757.0 from Darknet weights.
  1. Convert it to a Core ML model. h5_coreml_full.py is the same as your script https://github.com/Ma-Dan/YOLOv3-CoreML/blob/master/Convert/coreml.py, only with the input file moved to a command-line argument.
$ python h5_coreml_full.py model_data/yolo.h5
WARNING:root:Keras version 2.1.5 detected. Last version known to be fully compatible of Keras is 2.1.3 .
WARNING:root:TensorFlow version 1.6.0 detected. Last version known to be fully compatible is 1.5.0 .
2018-06-18 10:04:56.709285: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/keras/models.py:255: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '
0 : input_1, <keras.engine.topology.InputLayer object at 0x7f98b1961b70>
1 : conv2d_1, <keras.layers.convolutional.Conv2D object at 0x7f98b1961be0>
2 : batch_normalization_1, <keras.layers.normalization.BatchNormalization object at 0x7f98b1961ef0>
3 : leaky_re_lu_1, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1961eb8>
4 : zero_padding2d_1, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18f9278>
5 : conv2d_2, <keras.layers.convolutional.Conv2D object at 0x7f98b18f92e8>
6 : batch_normalization_2, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9470>
7 : leaky_re_lu_2, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f95c0>
8 : conv2d_3, <keras.layers.convolutional.Conv2D object at 0x7f98b18f95f8>
9 : batch_normalization_3, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9780>
10 : leaky_re_lu_3, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f98d0>
11 : conv2d_4, <keras.layers.convolutional.Conv2D object at 0x7f98b18f9908>
12 : batch_normalization_4, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9a90>
13 : leaky_re_lu_4, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f9be0>
14 : add_1, <keras.layers.merge.Add object at 0x7f98b18f9c18>
15 : zero_padding2d_2, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18f9c50>
16 : conv2d_5, <keras.layers.convolutional.Conv2D object at 0x7f98b18f9cc0>
17 : batch_normalization_5, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9e48>
18 : leaky_re_lu_5, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f9f98>
19 : conv2d_6, <keras.layers.convolutional.Conv2D object at 0x7f98b1961fd0>
20 : batch_normalization_6, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ff198>
21 : leaky_re_lu_6, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ff2e8>
22 : conv2d_7, <keras.layers.convolutional.Conv2D object at 0x7f98b18ff320>
23 : batch_normalization_7, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ff4a8>
24 : leaky_re_lu_7, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ff5f8>
25 : add_2, <keras.layers.merge.Add object at 0x7f98b18ff630>
26 : conv2d_8, <keras.layers.convolutional.Conv2D object at 0x7f98b18ff668>
27 : batch_normalization_8, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ff7f0>
28 : leaky_re_lu_8, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ff940>
29 : conv2d_9, <keras.layers.convolutional.Conv2D object at 0x7f98b18ff978>
30 : batch_normalization_9, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ffb00>
31 : leaky_re_lu_9, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ffc50>
32 : add_3, <keras.layers.merge.Add object at 0x7f98b18ffc88>
33 : zero_padding2d_3, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18ffcc0>
34 : conv2d_10, <keras.layers.convolutional.Conv2D object at 0x7f98b18ffd30>
35 : batch_normalization_10, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ffeb8>
36 : leaky_re_lu_10, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f9fd0>
37 : conv2d_11, <keras.layers.convolutional.Conv2D object at 0x7f98b190d080>
38 : batch_normalization_11, <keras.layers.normalization.BatchNormalization object at 0x7f98b190d208>
39 : leaky_re_lu_11, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190d358>
40 : conv2d_12, <keras.layers.convolutional.Conv2D object at 0x7f98b190d390>
41 : batch_normalization_12, <keras.layers.normalization.BatchNormalization object at 0x7f98b190d518>
42 : leaky_re_lu_12, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190d668>
43 : add_4, <keras.layers.merge.Add object at 0x7f98b190d6a0>
44 : conv2d_13, <keras.layers.convolutional.Conv2D object at 0x7f98b190d6d8>
45 : batch_normalization_13, <keras.layers.normalization.BatchNormalization object at 0x7f98b190d860>
46 : leaky_re_lu_13, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190d9b0>
47 : conv2d_14, <keras.layers.convolutional.Conv2D object at 0x7f98b190d9e8>
48 : batch_normalization_14, <keras.layers.normalization.BatchNormalization object at 0x7f98b190db70>
49 : leaky_re_lu_14, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190dcc0>
50 : add_5, <keras.layers.merge.Add object at 0x7f98b190dcf8>
51 : conv2d_15, <keras.layers.convolutional.Conv2D object at 0x7f98b190dd30>
52 : batch_normalization_15, <keras.layers.normalization.BatchNormalization object at 0x7f98b190deb8>
53 : leaky_re_lu_15, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18fffd0>
54 : conv2d_16, <keras.layers.convolutional.Conv2D object at 0x7f98b1915080>
55 : batch_normalization_16, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915208>
56 : leaky_re_lu_16, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1915358>
57 : add_6, <keras.layers.merge.Add object at 0x7f98b1915390>
58 : conv2d_17, <keras.layers.convolutional.Conv2D object at 0x7f98b19153c8>
59 : batch_normalization_17, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915550>
60 : leaky_re_lu_17, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b19156a0>
61 : conv2d_18, <keras.layers.convolutional.Conv2D object at 0x7f98b19156d8>
62 : batch_normalization_18, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915860>
63 : leaky_re_lu_18, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b19159b0>
64 : add_7, <keras.layers.merge.Add object at 0x7f98b19159e8>
65 : conv2d_19, <keras.layers.convolutional.Conv2D object at 0x7f98b1915a20>
66 : batch_normalization_19, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915ba8>
67 : leaky_re_lu_19, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1915cf8>
68 : conv2d_20, <keras.layers.convolutional.Conv2D object at 0x7f98b1915d30>
69 : batch_normalization_20, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915eb8>
70 : leaky_re_lu_20, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190dfd0>
71 : add_8, <keras.layers.merge.Add object at 0x7f98b191c080>
72 : conv2d_21, <keras.layers.convolutional.Conv2D object at 0x7f98b191c0b8>
73 : batch_normalization_21, <keras.layers.normalization.BatchNormalization object at 0x7f98b191c240>
74 : leaky_re_lu_21, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191c390>
75 : conv2d_22, <keras.layers.convolutional.Conv2D object at 0x7f98b191c3c8>
76 : batch_normalization_22, <keras.layers.normalization.BatchNormalization object at 0x7f98b191c550>
77 : leaky_re_lu_22, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191c6a0>
78 : add_9, <keras.layers.merge.Add object at 0x7f98b191c6d8>
79 : conv2d_23, <keras.layers.convolutional.Conv2D object at 0x7f98b191c710>
80 : batch_normalization_23, <keras.layers.normalization.BatchNormalization object at 0x7f98b191c898>
81 : leaky_re_lu_23, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191c9e8>
82 : conv2d_24, <keras.layers.convolutional.Conv2D object at 0x7f98b191ca20>
83 : batch_normalization_24, <keras.layers.normalization.BatchNormalization object at 0x7f98b191cba8>
84 : leaky_re_lu_24, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191ccf8>
85 : add_10, <keras.layers.merge.Add object at 0x7f98b191cd30>
86 : conv2d_25, <keras.layers.convolutional.Conv2D object at 0x7f98b191cd68>
87 : batch_normalization_25, <keras.layers.normalization.BatchNormalization object at 0x7f98b191cef0>
88 : leaky_re_lu_25, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1915fd0>
89 : conv2d_26, <keras.layers.convolutional.Conv2D object at 0x7f98b18a50b8>
90 : batch_normalization_26, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5240>
91 : leaky_re_lu_26, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5390>
92 : add_11, <keras.layers.merge.Add object at 0x7f98b18a53c8>
93 : zero_padding2d_4, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18a5400>
94 : conv2d_27, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5470>
95 : batch_normalization_27, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a55f8>
96 : leaky_re_lu_27, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5748>
97 : conv2d_28, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5780>
98 : batch_normalization_28, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5908>
99 : leaky_re_lu_28, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5a58>
100 : conv2d_29, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5a90>
101 : batch_normalization_29, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5c18>
102 : leaky_re_lu_29, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5d68>
103 : add_12, <keras.layers.merge.Add object at 0x7f98b18a5da0>
104 : conv2d_30, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5dd8>
105 : batch_normalization_30, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5f60>
106 : leaky_re_lu_30, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191cfd0>
107 : conv2d_31, <keras.layers.convolutional.Conv2D object at 0x7f98b18ac128>
108 : batch_normalization_31, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ac2b0>
109 : leaky_re_lu_31, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ac400>
110 : add_13, <keras.layers.merge.Add object at 0x7f98b18ac438>
111 : conv2d_32, <keras.layers.convolutional.Conv2D object at 0x7f98b18ac470>
112 : batch_normalization_32, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ac5f8>
113 : leaky_re_lu_32, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ac748>
114 : conv2d_33, <keras.layers.convolutional.Conv2D object at 0x7f98b18ac780>
115 : batch_normalization_33, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ac908>
116 : leaky_re_lu_33, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18aca58>
117 : add_14, <keras.layers.merge.Add object at 0x7f98b18aca90>
118 : conv2d_34, <keras.layers.convolutional.Conv2D object at 0x7f98b18acac8>
119 : batch_normalization_34, <keras.layers.normalization.BatchNormalization object at 0x7f98b18acc50>
120 : leaky_re_lu_34, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18acda0>
121 : conv2d_35, <keras.layers.convolutional.Conv2D object at 0x7f98b18acdd8>
122 : batch_normalization_35, <keras.layers.normalization.BatchNormalization object at 0x7f98b18acf60>
123 : leaky_re_lu_35, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5fd0>
124 : add_15, <keras.layers.merge.Add object at 0x7f98b18b4128>
125 : conv2d_36, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4160>
126 : batch_normalization_36, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b42e8>
127 : leaky_re_lu_36, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4438>
128 : conv2d_37, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4470>
129 : batch_normalization_37, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b45f8>
130 : leaky_re_lu_37, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4748>
131 : add_16, <keras.layers.merge.Add object at 0x7f98b18b4780>
132 : conv2d_38, <keras.layers.convolutional.Conv2D object at 0x7f98b18b47b8>
133 : batch_normalization_38, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b4940>
134 : leaky_re_lu_38, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4a90>
135 : conv2d_39, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4ac8>
136 : batch_normalization_39, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b4c50>
137 : leaky_re_lu_39, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4da0>
138 : add_17, <keras.layers.merge.Add object at 0x7f98b18b4dd8>
139 : conv2d_40, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4e10>
140 : batch_normalization_40, <keras.layers.normalization.BatchNormalization object at 0x7f98b18acfd0>
141 : leaky_re_lu_40, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bd128>
142 : conv2d_41, <keras.layers.convolutional.Conv2D object at 0x7f98b18bd160>
143 : batch_normalization_41, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bd2e8>
144 : leaky_re_lu_41, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bd438>
145 : add_18, <keras.layers.merge.Add object at 0x7f98b18bd470>
146 : conv2d_42, <keras.layers.convolutional.Conv2D object at 0x7f98b18bd4a8>
147 : batch_normalization_42, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bd630>
148 : leaky_re_lu_42, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bd780>
149 : conv2d_43, <keras.layers.convolutional.Conv2D object at 0x7f98b18bd7b8>
150 : batch_normalization_43, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bd940>
151 : leaky_re_lu_43, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bda90>
152 : add_19, <keras.layers.merge.Add object at 0x7f98b18bdac8>
153 : zero_padding2d_5, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18bdb00>
154 : conv2d_44, <keras.layers.convolutional.Conv2D object at 0x7f98b18bdb70>
155 : batch_normalization_44, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bdcf8>
156 : leaky_re_lu_44, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bde48>
157 : conv2d_45, <keras.layers.convolutional.Conv2D object at 0x7f98b18bde80>
158 : batch_normalization_45, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b4f98>
159 : leaky_re_lu_45, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c4198>
160 : conv2d_46, <keras.layers.convolutional.Conv2D object at 0x7f98b18c41d0>
161 : batch_normalization_46, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c4358>
162 : leaky_re_lu_46, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c44a8>
163 : add_20, <keras.layers.merge.Add object at 0x7f98b18c44e0>
164 : conv2d_47, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4518>
165 : batch_normalization_47, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c46a0>
166 : leaky_re_lu_47, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c47f0>
167 : conv2d_48, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4828>
168 : batch_normalization_48, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c49b0>
169 : leaky_re_lu_48, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c4b00>
170 : add_21, <keras.layers.merge.Add object at 0x7f98b18c4b38>
171 : conv2d_49, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4b70>
172 : batch_normalization_49, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c4cf8>
173 : leaky_re_lu_49, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c4e48>
174 : conv2d_50, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4e80>
175 : batch_normalization_50, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bdfd0>
176 : leaky_re_lu_50, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cd198>
177 : add_22, <keras.layers.merge.Add object at 0x7f98b18cd1d0>
178 : conv2d_51, <keras.layers.convolutional.Conv2D object at 0x7f98b18cd208>
179 : batch_normalization_51, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cd390>
180 : leaky_re_lu_51, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cd4e0>
181 : conv2d_52, <keras.layers.convolutional.Conv2D object at 0x7f98b18cd518>
182 : batch_normalization_52, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cd6a0>
183 : leaky_re_lu_52, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cd7f0>
184 : add_23, <keras.layers.merge.Add object at 0x7f98b18cd828>
185 : conv2d_53, <keras.layers.convolutional.Conv2D object at 0x7f98b18cd860>
186 : batch_normalization_53, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cd9e8>
187 : leaky_re_lu_53, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cdb38>
188 : conv2d_54, <keras.layers.convolutional.Conv2D object at 0x7f98b18cdb70>
189 : batch_normalization_54, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cdcf8>
190 : leaky_re_lu_54, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cde48>
191 : conv2d_55, <keras.layers.convolutional.Conv2D object at 0x7f98b18cde80>
192 : batch_normalization_55, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c4fd0>
193 : leaky_re_lu_55, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d4198>
194 : conv2d_56, <keras.layers.convolutional.Conv2D object at 0x7f98b18d41d0>
195 : batch_normalization_56, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4358>
196 : leaky_re_lu_56, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d44a8>
197 : conv2d_57, <keras.layers.convolutional.Conv2D object at 0x7f98b18d44e0>
198 : batch_normalization_57, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4668>
199 : leaky_re_lu_57, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d47b8>
200 : conv2d_60, <keras.layers.convolutional.Conv2D object at 0x7f98b18d47f0>
201 : batch_normalization_59, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4978>
202 : leaky_re_lu_59, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d4ac8>
203 : up_sampling2d_1, <keras.layers.convolutional.UpSampling2D object at 0x7f98b18d4b00>
204 : concatenate_1, <keras.layers.merge.Concatenate object at 0x7f98b18d4b70>
205 : conv2d_61, <keras.layers.convolutional.Conv2D object at 0x7f98b18d4ba8>
206 : batch_normalization_60, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4d30>
207 : leaky_re_lu_60, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d4e80>
208 : conv2d_62, <keras.layers.convolutional.Conv2D object at 0x7f98b18d4eb8>
209 : batch_normalization_61, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cdfd0>
210 : leaky_re_lu_61, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dc1d0>
211 : conv2d_63, <keras.layers.convolutional.Conv2D object at 0x7f98b18dc208>
212 : batch_normalization_62, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dc390>
213 : leaky_re_lu_62, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dc4e0>
214 : conv2d_64, <keras.layers.convolutional.Conv2D object at 0x7f98b18dc518>
215 : batch_normalization_63, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dc6a0>
216 : leaky_re_lu_63, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dc7f0>
217 : conv2d_65, <keras.layers.convolutional.Conv2D object at 0x7f98b18dc828>
218 : batch_normalization_64, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dc9b0>
219 : leaky_re_lu_64, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dcb00>
220 : conv2d_68, <keras.layers.convolutional.Conv2D object at 0x7f98b18dcb38>
221 : batch_normalization_66, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dccc0>
222 : leaky_re_lu_66, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dce10>
223 : up_sampling2d_2, <keras.layers.convolutional.UpSampling2D object at 0x7f98b18dce48>
224 : concatenate_2, <keras.layers.merge.Concatenate object at 0x7f98b18dceb8>
225 : conv2d_69, <keras.layers.convolutional.Conv2D object at 0x7f98b18dcef0>
226 : batch_normalization_67, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4f60>
227 : leaky_re_lu_67, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864208>
228 : conv2d_70, <keras.layers.convolutional.Conv2D object at 0x7f98b1864240>
229 : batch_normalization_68, <keras.layers.normalization.BatchNormalization object at 0x7f98b18643c8>
230 : leaky_re_lu_68, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864518>
231 : conv2d_71, <keras.layers.convolutional.Conv2D object at 0x7f98b1864550>
232 : batch_normalization_69, <keras.layers.normalization.BatchNormalization object at 0x7f98b18646d8>
233 : leaky_re_lu_69, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864828>
234 : conv2d_72, <keras.layers.convolutional.Conv2D object at 0x7f98b1864860>
235 : batch_normalization_70, <keras.layers.normalization.BatchNormalization object at 0x7f98b18649e8>
236 : leaky_re_lu_70, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864b38>
237 : conv2d_73, <keras.layers.convolutional.Conv2D object at 0x7f98b1864b70>
238 : batch_normalization_71, <keras.layers.normalization.BatchNormalization object at 0x7f98b1864cf8>
239 : leaky_re_lu_71, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864e48>
240 : conv2d_58, <keras.layers.convolutional.Conv2D object at 0x7f98b1864e80>
241 : conv2d_66, <keras.layers.convolutional.Conv2D object at 0x7f98b18dcf98>
242 : conv2d_74, <keras.layers.convolutional.Conv2D object at 0x7f98b186a208>
243 : batch_normalization_58, <keras.layers.normalization.BatchNormalization object at 0x7f98b186a3c8>
244 : batch_normalization_65, <keras.layers.normalization.BatchNormalization object at 0x7f98b186a518>
245 : batch_normalization_72, <keras.layers.normalization.BatchNormalization object at 0x7f98b186a630>
246 : leaky_re_lu_58, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b186a748>
247 : leaky_re_lu_65, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b186a780>
248 : leaky_re_lu_72, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b186a7b8>
249 : conv2d_59, <keras.layers.convolutional.Conv2D object at 0x7f98b186a7f0>
250 : conv2d_67, <keras.layers.convolutional.Conv2D object at 0x7f98b186a978>
251 : conv2d_75, <keras.layers.convolutional.Conv2D object at 0x7f98b186ab38>
Traceback (most recent call last):
  File "h5_coreml_full.py", line 5, in <module>
    image_input_names='input1', output_names=['output1', 'output2', 'output3'], image_scale=1/255.)
  File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/converters/keras/_keras_converter.py", line 745, in convert
    custom_conversion_functions=custom_conversion_functions)
  File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/converters/keras/_keras_converter.py", line 543, in convertToSpec
    custom_objects=custom_objects)
  File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/converters/keras/_keras2_converter.py", line 350, in _convert
    image_scale = image_scale)
  File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/models/neural_network.py", line 2542, in set_pre_processing_parameters
    channels, height, width = array_shape
ValueError: not enough values to unpack (expected 3, got 1)

Please help and let me know how to get this resolved.

My environment (the versions tested/required by keras-yolo3):

virtualenv -p /usr/bin/python36 coremltools
source coremltools/bin/activate
pip install keras==2.1.5 tensorflow==1.6.0
pip install -U coremltools
pip install h5py

After training from the pretrained model, I am not able to convert the result using your script.

Hey, thanks!
I successfully trained my first model, but after that I am not able to convert it using:

import coremltools

coreml_model = coremltools.converters.keras.convert('model_data/trained_weights_final.h5', input_names='input1', image_input_names='input1', output_names=['output1', 'output2', 'output3'], image_scale=1/255.)

coreml_model.input_description['input1'] = 'Input image'
coreml_model.output_description['output1'] = 'The 13x13 grid (Scale1)'
coreml_model.output_description['output2'] = 'The 26x26 grid (Scale2)'
coreml_model.output_description['output3'] = 'The 52x52 grid (Scale3)'

coreml_model.author = 'asdas'
coreml_model.license = 'BlaBla'
coreml_model.short_description = "erster YoloVersuch :D"

coreml_model.save('Yolov3.mlmodel')

I am able to convert the pretrained model, but after training that model the converter says:

python3 converth5ToCoreML.py

Traceback (most recent call last):
  File "converth5ToCoreML.py", line 3, in <module>
    coreml_model = coremltools.converters.keras.convert('/Users/robinsonhus0/Desktop/neuer/logs/000/trained_weights_final.h5', input_names='input1', image_input_names='input1', output_names=['output1', 'output2', 'output3'], image_scale=1/255.)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 793, in convert
    respect_trainable=respect_trainable)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 579, in convertToSpec
    respect_trainable=respect_trainable)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/coremltools/converters/keras/_keras2_converter.py", line 311, in _convert
    model = _keras.models.load_model(model, custom_objects = custom_objects)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/saving.py", line 419, in load_model
    model = _deserialize_model(f, custom_objects, compile)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/saving.py", line 221, in _deserialize_model
    model_config = f['model_config']
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/utils/io_utils.py", line 302, in __getitem__
    raise ValueError('Cannot create group in read only mode.')

ValueError: Cannot create group in read only mode.

Any idea? Thank you!

Using a PyTorch-converted Core ML model

Thank you for this great codebase! I was able to get the YOLOv3 demo working on my device. Its performance is great!

When I try to drop in a PyTorch-converted CoreML model (following this repo for PyTorch training and ONNX conversion: https://github.com/ultralytics/yolov3), the app either crashes or remains open, but doesn't seem to detect anything. In fact, I'm not sure it's reaching the point of even trying to detect anything, as it's not displaying any FPS info at the bottom of the screen. Has anyone else experienced this issue? Any advice would be appreciated.

scaleFit instead of scaleFill

Thank you for the great codebase! I was able to get a PyTorch-trained version of YOLOv3 (with spatial pyramid pooling) to run on an iPad Pro at 26 fps with this codebase.

I don't have an iOS/Swift development background, and I wonder if you could point me in the right direction for modifying the code. My original model was trained with scaleFit, where the long side is set to 416 pixels and the short side is padded, and I'm trying to change the image scaling to match training. To that end, I found this relevant block in ViewController.swift:

// NOTE: If you choose another crop/scale option, then you must also
// change how the BoundingBox objects get scaled when they are drawn.
// Currently they assume the full input image is used.
request.imageCropAndScaleOption = .scaleFill

I want to change this to .scaleFit, but I don't understand how to make the change for the BoundingBox scaling when drawn, as suggested by the comment. Could you possibly point me in the right direction?
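Not the author, but for reference, the change the comment warns about is mostly geometry. Below is a minimal sketch rather than the repo's actual drawing code: the function name unletterbox is made up, and it assumes the aspect-fit scaling centers the image inside the 416x416 model input with padding on the short side (worth verifying how Vision's .scaleFit actually anchors the image). It maps a predicted box from model space back to the original image, after which the existing view scaling can be applied unchanged.

import CoreGraphics

// Hedged sketch: undo an aspect-fit ("letterbox") mapping for a predicted box.
// Assumes the image was scaled so that it fits entirely inside the model input
// and that the leftover area is padding centered around the image.
func unletterbox(_ box: CGRect, modelSize: CGFloat, imageSize: CGSize) -> CGRect {
    // Scale factor that fits the whole image inside modelSize x modelSize.
    let scale = min(modelSize / imageSize.width, modelSize / imageSize.height)

    // Area the scaled image occupies inside the model input, and the padding.
    let fittedWidth  = imageSize.width  * scale
    let fittedHeight = imageSize.height * scale
    let padX = (modelSize - fittedWidth)  / 2
    let padY = (modelSize - fittedHeight) / 2

    // Remove the padding offset, then undo the scaling.
    return CGRect(x: (box.origin.x - padX) / scale,
                  y: (box.origin.y - padY) / scale,
                  width:  box.width  / scale,
                  height: box.height / scale)
}

If Vision anchors the scaled image at the top-left instead of centering it, drop the division by 2 in the padding terms.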

Team error on Xcode

I am getting a build error in Xcode saying there is no account for the team.

Is it a settings change on my side, or is there a team developer account attached to the project?

Please let me know how to change the settings.

The error is No account for team "LXGRKDxxxx". Add a new account in the Accounts preference pane or verify that your accounts have valid credentials.

There is also a code signing error (I guess this is because I haven't updated the iOS firmware).

Question: Custom Model detection

Dear Author

I am trying to convert a custom pretrained model (960x960, 22 classes).
The conversion succeeded and detection with the Keras model works correctly.
Converting to mlmodel also seems to succeed, with the following.
In Xcode,
Input: Image (Color 960 × 960)
Output3: MultiArray (Float32 81 × 120 × 120)
Output2: MultiArray (Float32 81 × 60 × 60)
Output1: MultiArray (Float32 81 × 30 × 30)
I changed the following static data in Swift, as well as the label data:
public static let inputWidth = 960
public static let inputHeight = 960

let numClasses = 22

let gridHeight = [30, 60, 120]
let gridWidth = [30, 60, 120]
After running the program, no detection happens. Tracing the value of tc in
let tc = Float(featurePointer[offset(channel + 4, cx, cy)])
shows that tc is always negative, so after the sigmoid conversion it is mostly zero,
and in the end nothing is detected.
Am I missing anything needed to run the custom model?
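Not an official answer, but one thing worth double-checking is the channel layout the decoder expects per scale. A minimal sketch of the arithmetic follows; the helper names are illustrative, not the repo's API.

// Each scale carries boxesPerCell anchors, each with (numClasses + 5) channels:
// tx, ty, tw, th, tc and the class scores.
let numClasses = 22
let boxesPerCell = 3
let channelsPerBox = numClasses + 5                   // 27
let expectedChannels = boxesPerCell * channelsPerBox  // 81

// CHW indexing into one output MLMultiArray for grid cell (cx, cy):
func offset(_ channel: Int, _ cx: Int, _ cy: Int,
            gridWidth: Int, gridHeight: Int) -> Int {
    return channel * gridHeight * gridWidth + cy * gridWidth + cx
}

The numbers above do line up (3 x (22 + 5) = 81 matches the Float32 81 × N × N outputs, and 960/32, 960/16, 960/8 give 30, 60, 120), so the anchor values and the ordering of the three outputs are the next things to compare against the training configuration.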

Detection in Image and not video

Is there any way to do detection on a still image instead of video? By image I mean capturing an image with the iPhone camera, then passing that image through the model and displaying the result in an image view. Is this possible?

Thanks
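Not part of the repo, but as a rough sketch of the idea: Vision can run the same Core ML model on a single image instead of on camera frames. The class name YOLOv3 below stands for whatever class Xcode generates from your .mlmodel, and the raw grid outputs still need the same decoding and non-max suppression the video path performs before anything can be drawn into an image view.

import UIKit
import Vision
import CoreML

// Run the model once on a still image and hand back the raw feature outputs.
func detect(in image: UIImage,
            completion: @escaping ([VNCoreMLFeatureValueObservation]) -> Void) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: YOLOv3().model) else {
        completion([])
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let observations = request.results as? [VNCoreMLFeatureValueObservation] ?? []
        completion(observations)
    }
    request.imageCropAndScaleOption = .scaleFill
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}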

Converting YOLOv3-Tiny

I was able to successfully convert YOLOv3. But what about YOLOv3-Tiny? How should I change convert.py?

Cannot convert to CoreML

I used this https://github.com/qqwweee/keras-yolo3 to convert my model trained using Darknet to a Keras model.
I'm trying to convert it to Core ML but get errors. I think it would be very helpful if you could specify what your virtualenv looks like when you do this, @Ma-Dan.

The error I get:

tf.estimator package not installed.
tf.estimator package not installed.
Traceback (most recent call last):
File "coreml.py", line 3, in <module>
coreml_model = coremltools.converters.keras.convert('Doors.h5', input_names='input1', image_input_names='input1', output_names=['output1', 'output2', 'output3'], image_scale=1/255.)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 760, in convert
custom_conversion_functions=custom_conversion_functions)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 537, in convertToSpec
custom_objects=custom_objects)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 168, in _convert
model = _keras.models.load_model(model, custom_objects = custom_objects)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/keras/models.py", line 142, in load_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/keras/models.py", line 193, in model_from_config
return layer_from_config(config, custom_objects=custom_objects)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/keras/utils/layer_utils.py", line 40, in layer_from_config
custom_objects=custom_objects)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/keras/engine/topology.py", line 2582, in from_config
process_layer(layer_data)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/keras/engine/topology.py", line 2560, in process_layer
custom_objects=custom_objects)
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/keras/utils/layer_utils.py", line 42, in layer_from_config
return layer_class.from_config(config['config'])
File "/home/jakup/Utilities/coreml/local/lib/python2.7/site-packages/keras/engine/topology.py", line 1025, in from_config
return cls(**config)
TypeError: __init__() got an unexpected keyword argument 'dtype'

Bug while running the iPhone app

Your demo mlmodel file worked, but a model I created from raw data gets an error. Has anyone solved this problem?

[error]
(screenshot: Screen Shot 2019-08-14 at 3 23 08)

classes[c] = Float(featurePointer[offset(channel + 5 + c, cx, cy)])
                    }
2019-08-14 03:12:32.375783+0900 YOLOv3-CoreML[1032:231792] [MC] System group container for systemgroup.com.apple.configurationprofiles path is /private/var/containers/Shared/SystemGroup/systemgroup.com.apple.configurationprofiles
2019-08-14 03:12:32.384252+0900 YOLOv3-CoreML[1032:231792] [MC] Reading from public effective user settings.
YOLOv3-CoreML was compiled with optimization - stepping may behave oddly; variables may not be available.
(lldb) 

[environment]
python 3.6.5
tensorflow 1.7.0
h5py 2.7.1
Keras 2.1.6
coremltools 0.8

[process]

  1. I created the h5 file using the repository below:
    https://github.com/qqwweee/keras-yolo3
    I modified convert.py as below:
input_layer = Input(shape=(416, 416, 3))
  2. I converted the h5 file to an mlmodel and ran the iOS app (Xcode 10.3, iPhone 6s). The app ran, but nothing was detected and I got the error above.
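Not a fix, but a sanity check that sometimes narrows this down (a hedged sketch; printOutputShapes is my own helper, not part of the repo): print the converted model's output shapes and compare them with what the decoder expects. For a 416x416 input and the 80 COCO classes the three outputs should be 255x13x13, 255x26x26 and 255x52x52; a different class count changes 255 to 3 x (numClasses + 5).

import CoreML

// Print every output's MultiArray shape of a compiled Core ML model.
func printOutputShapes(of model: MLModel) {
    for (name, description) in model.modelDescription.outputDescriptionsByName {
        if let constraint = description.multiArrayConstraint {
            print(name, constraint.shape)   // e.g. output1 [255, 13, 13]
        }
    }
}

Calling it with the generated class's model property, for example printOutputShapes(of: YOLOv3().model), shows immediately whether the conversion preserved the grid sizes and channel counts the Swift decoder was written for.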
