
hasnainraz / fast-srgan

A Fast Deep Learning Model to Upsample Low Resolution Videos to High Resolution at 30fps

License: MIT License

Language: Python 100.00%
Topics: sisr, single-image-super-resolution, super-resolution, srgan, fastsrgan, realtime-super-resolution, tensorflow, tf2, tf-keras, gans

fast-srgan's Introduction

Fast-SRGAN

The goal of this repository is to enable real-time super resolution for upsampling low-resolution videos. The design follows the SRGAN architecture, but inverted residual blocks are used in place of standard residual blocks for parameter efficiency and fast operation. The idea is somewhat inspired by real-time image enhancement GANs.
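For orientation, below is a minimal sketch of a MobileNetV2-style inverted residual block in tf.keras. It is illustrative only: the function name, expansion factor, and normalization are assumptions rather than the repository's exact implementation (though the PReLU activation matches the p_re_lu layers visible in a traceback further down this page).

import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual_block(x, filters=32, expansion=6):
    # Expand to a wider representation with a cheap 1x1 convolution.
    y = layers.Conv2D(filters * expansion, 1, padding='same')(x)
    y = layers.BatchNormalization()(y)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    # A depthwise 3x3 convolution does the spatial filtering at low cost.
    y = layers.DepthwiseConv2D(3, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    # Project back down to `filters` channels (linear bottleneck, no activation).
    y = layers.Conv2D(filters, 1, padding='same')(y)
    y = layers.BatchNormalization()(y)
    # The residual connection keeps gradients flowing through stacked blocks.
    return layers.Add()([x, y])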

The training setup looks like the following diagram:

Speed Benchmarks

The following runtimes/FPS were obtained by averaging runtimes over 800 frames, measured on a GTX 1080.

Input Image Size | Output Size | Time (s) | FPS
---------------- | ----------- | -------- | ---
128x128          | 512x512     | 0.019    | 52
256x256          | 1024x1024   | 0.034    | 30
384x384          | 1536x1536   | 0.068    | 15

As the table shows, it's possible to upsample to 720p at around 30 FPS.
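For anyone wanting to reproduce such numbers, here is a hedged benchmarking sketch. The model path comes from this README; the flexible-input rebuild is the one described under 'Changing Input Size' below; the dummy frame stands in for real video frames.

import time
import numpy as np
from tensorflow import keras

# Load the generator and rebuild it with a flexible input shape.
model = keras.models.load_model('models/generator.h5', compile=False)
inputs = keras.Input((None, None, 3))
model = keras.models.Model(inputs, model(inputs))

frame = np.random.rand(1, 256, 256, 3).astype(np.float32)  # dummy 256x256 frame
model.predict(frame)  # warm-up run so graph tracing is not timed

n_frames = 800
start = time.time()
for _ in range(n_frames):
    model.predict(frame)
per_frame = (time.time() - start) / n_frames
print(f'{per_frame:.3f} s/frame, {1.0 / per_frame:.1f} FPS')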

Requirements

This was tested on Python 3.7. To install the required packages, use the provided requirements.txt file like so:

pip install -r requirements.txt

Pre-trained Model

A generator model pretrained on the DIV2K dataset is provided in the 'models' directory. It uses 6 inverted residual blocks, with 32 filters in every layer of the generator.

Upsampling is done via phase shift (also known as pixel shuffle or sub-pixel convolution) in the low-resolution space for speed.
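In TensorFlow this operation is exposed as tf.nn.depth_to_space. A small illustration (the shapes here are examples, not the model's actual layer sizes):

import tensorflow as tf

# A feature map with C*r*r channels is rearranged into a map r times larger
# in each spatial dimension, with C channels. Here r = 2 and C = 3.
x = tf.random.normal([1, 96, 96, 12])      # 12 = 3 * 2 * 2
y = tf.nn.depth_to_space(x, block_size=2)  # -> shape [1, 192, 192, 3]
print(y.shape)

Two such x2 stages give the overall 4x upsampling.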

To try out the provided pretrained model on your own images, run the following:

python infer.py --image_dir 'path/to/your/image/directory' --output_dir 'path/to/save/super/resolution/images'

Training

To train, simply execute the following command in your terminal:

python main.py --image_dir 'path/to/image/directory' --hr_size 384 --lr 1e-4 --save_iter 200 --epochs 10 --batch_size 14

Model checkpoints and training summaries are saved for TensorBoard. To monitor training progress, point TensorBoard at the 'logs' directory that will be created when you start training.

Samples

The following are some results from the provided pretrained model. Left: the low-resolution image after 4x bicubic upsampling. Middle: the output of the model. Right: the actual high-resolution image.

Sample comparisons: 384x384 to 1536x1536 upsampling, 256x256 to 1024x1024 upsampling, and 128x128 to 512x512 upsampling.

Extreme Super Resolution

Upsampling already high-quality images 4x as a sanity check that the image is not destroyed (since the network is trained on low-quality inputs, it should upsample high-quality images while preserving their quality).

Changing Input Size

The provided model was trained on 384x384 inputs, but to run it on inputs of arbitrary size, you'll have to change the input shape like so:

from tensorflow import keras

# Load the model
model = keras.models.load_model('models/generator.h5')

# Define arbitrary spatial dims, and 3 channels.
inputs = keras.Input((None, None, 3))

# Trace out the graph using the input:
outputs = model(inputs)

# Override the model:
model = keras.models.Model(inputs, outputs)

# Now you are free to predict on images of any size.
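Continuing from the block above, a minimal inference sketch. The [-1, 1] input scaling and the matching output rescale are assumptions based on infer.py as quoted in the issues further down this page:

import cv2
import numpy as np

# Read an image of any size and map it to the [-1, 1] range.
img = cv2.cvtColor(cv2.imread('input.jpg'), cv2.COLOR_BGR2RGB)
low_res = img.astype(np.float32) / 127.5 - 1.0

# Predict a 4x upsampled image.
sr = model.predict(np.expand_dims(low_res, axis=0))[0]

# Map the tanh output back to 0-255 and save as BGR for OpenCV.
sr = (((sr + 1.0) / 2.0) * 255.0).astype(np.uint8)
cv2.imwrite('output.jpg', cv2.cvtColor(sr, cv2.COLOR_RGB2BGR))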

Contributing

If you have ideas on improving model performance, adding metrics, or any other changes, please make a pull request or open an issue. I'd be happy to accept any contributions.

fast-srgan's People

Contributors: dependabot[bot], hasnainraz, wenheli

fast-srgan's Issues

ImportError: cannot import name 'DataLoaderInfer'

I am trying to run inference on my own images but keep getting this error. Any suggestions?
ImportError: cannot import name 'DataLoaderInfer'
There is no class or function of that name in dataloader.py. Maybe I am wrong; kindly correct me.

infer.py is not working with the same dependencies...

Stuck at this for a while: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 491520000 exceeds 10% of system memory.

By the way, when I train on my own data, do I need to put the high-resolution and low-resolution images together in the same folder?

Training requirements?

Interesting repo. Can you please provide more details about the training process? For example, what data/images do we need for training? Is it just a set of images? For instance, I want to train on CG images.

python main.py --image_dir 'path/to/image/directory' --hr_size 384 --lr 1e-4 --save_iter 200 --epochs 10 --batch_size 14

Running the model in Real time with webcam

Hi @HasnainRaz, the information and the model you provide are really fantastic. Could you please guide me or provide some instructions on how to run this model on a webcam, or any better way to run it on a real-time device? I am trying to implement this on a 2070 Super GPU.
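Not an answer from the author, but a minimal OpenCV capture loop along these lines might look like the sketch below, reusing the flexible-input rebuild from 'Changing Input Size' and the [-1, 1] scaling assumed from infer.py:

import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model('models/generator.h5', compile=False)
inputs = keras.Input((None, None, 3))
model = keras.models.Model(inputs, model(inputs))

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # BGR -> RGB, scale to [-1, 1], predict, then rescale for display.
    x = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 127.5 - 1.0
    sr = model.predict(np.expand_dims(x, 0))[0]
    sr = (((sr + 1.0) / 2.0) * 255.0).astype(np.uint8)
    cv2.imshow('super resolution', cv2.cvtColor(sr, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()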

logs

Could you please provide the logs folder so that we can view the training statistics in TensorBoard?

Image Quality Not That Great

The output images don't seem to have the same quality as some other SRGAN models; in particular, when you zoom in you can see blurry/overly smooth pixels. Have you tried this with ESRGAN? Maybe that would lead to better results.

Reproducing high FPS

Thank you for this repository!

I'm trying to reproduce the FPS you reported, using 184x184 images, your pretrained model, and the infer.py script. The inference time should be under 0.034s, but I consistently get 0.4s.
I am using a weaker graphics card, a GeForce 980, but I don't think the slowdown should be that drastic.

Increasing the batch size doesn't help either; I still get 0.4s per image.

What am I doing wrong? What can I do to speed up inference?

Issue during Training

Hi, I used your latest code and trained the model with DIV2K.
However, the results are not as good, and I noticed that the losses keep fluctuating.
I used the following command:
python main.py --image_dir ./DIV2K_train_HR --hr_size 384 --lr 1e-4 --save_iter 200 --epochs 30 --batch_size 14
These are the logs after 30 epochs:
[EP1] Adversarial Loss=0.080617 Content Loss=0.001997 MSE Loss=0.026345 Discriminator Loss=0.781968
[EP2] Adversarial Loss=0.132610 Content Loss=0.001199 MSE Loss=0.035348 Discriminator Loss=0.905672
[EP3] Adversarial Loss=0.123783 Content Loss=0.001260 MSE Loss=0.022111 Discriminator Loss=0.938514
[EP4] Adversarial Loss=0.097441 Content Loss=0.001445 MSE Loss=0.015596 Discriminator Loss=1.121814
[EP5] Adversarial Loss=0.122653 Content Loss=0.001467 MSE Loss=0.019071 Discriminator Loss=0.786843
[EP6] Adversarial Loss=0.092276 Content Loss=0.001092 MSE Loss=0.023201 Discriminator Loss=0.901218
[EP7] Adversarial Loss=0.102440 Content Loss=0.001587 MSE Loss=0.018378 Discriminator Loss=0.702091
[EP8] Adversarial Loss=0.110358 Content Loss=0.001532 MSE Loss=0.026881 Discriminator Loss=0.590455
[EP9] Adversarial Loss=0.106875 Content Loss=0.003184 MSE Loss=0.020608 Discriminator Loss=0.387678
[EP10] Adversarial Loss=0.084382 Content Loss=0.002820 MSE Loss=0.021133 Discriminator Loss=0.382028
[EP11] Adversarial Loss=0.077473 Content Loss=0.004879 MSE Loss=0.015751 Discriminator Loss=0.420842
[EP12] Adversarial Loss=0.059834 Content Loss=0.004527 MSE Loss=0.013836 Discriminator Loss=0.307745
[EP13] Adversarial Loss=0.106401 Content Loss=0.006681 MSE Loss=0.019882 Discriminator Loss=0.106354
[EP14] Adversarial Loss=0.084455 Content Loss=0.006933 MSE Loss=0.021794 Discriminator Loss=0.038181
[EP15] Adversarial Loss=0.146453 Content Loss=0.007463 MSE Loss=0.028703 Discriminator Loss=0.057300
[EP16] Adversarial Loss=0.108233 Content Loss=0.008538 MSE Loss=0.021102 Discriminator Loss=0.021846
[EP17] Adversarial Loss=0.096775 Content Loss=0.010486 MSE Loss=0.016221 Discriminator Loss=0.137891
[EP18] Adversarial Loss=0.107159 Content Loss=0.007517 MSE Loss=0.021911 Discriminator Loss=0.029221
[EP19] Adversarial Loss=0.088703 Content Loss=0.008759 MSE Loss=0.009813 Discriminator Loss=0.192226
[EP20] Adversarial Loss=0.077869 Content Loss=0.010124 MSE Loss=0.018487 Discriminator Loss=0.073865
[EP21] Adversarial Loss=0.108033 Content Loss=0.008940 MSE Loss=0.020208 Discriminator Loss=0.020908
[EP22] Adversarial Loss=0.059458 Content Loss=0.007813 MSE Loss=0.018918 Discriminator Loss=0.023339
[EP23] Adversarial Loss=0.089555 Content Loss=0.004957 MSE Loss=0.022235 Discriminator Loss=0.076594
[EP24] Adversarial Loss=0.128833 Content Loss=0.009217 MSE Loss=0.023711 Discriminator Loss=0.017186
[EP25] Adversarial Loss=0.092714 Content Loss=0.007943 MSE Loss=0.028336 Discriminator Loss=0.020029
[EP26] Adversarial Loss=0.110232 Content Loss=0.007033 MSE Loss=0.019300 Discriminator Loss=0.024588
[EP27] Adversarial Loss=0.081943 Content Loss=0.006822 MSE Loss=0.022644 Discriminator Loss=0.024508
[EP28] Adversarial Loss=0.109066 Content Loss=0.008239 MSE Loss=0.027397 Discriminator Loss=0.017921
[EP29] Adversarial Loss=0.096089 Content Loss=0.007800 MSE Loss=0.028556 Discriminator Loss=0.009476
[EP30] Adversarial Loss=0.093064 Content Loss=0.008842 MSE Loss=0.019332 Discriminator Loss=0.023928

This is an example of a training image after 30 epochs (generated, HR, and LR images were attached).

The results are not very different from bilinear upsampling and still far from your provided pretrained model. Do you have any recommendations for fixing this?
Thanks

SRGAN beginner

Hello Hasnainraz,
It's really cool to see your SRGAN work. Could you link the dataset source and the minimum GPU requirements needed to reach a final model like yours?
It would be a huge help for beginners, including me.

feeding Low Res images to Generator without downsampling High Res images

Hi Hasnain,
Thanks for the great code. I have trained it and tested it on custom images; it works very well and is very computationally efficient compared to other SRGAN models I have used.
For my project I need to feed the generator low-resolution images separately (from a separate directory), without downsampling the high-resolution images. How should I change dataloader.py?
Any help would be appreciated. I tried loading low-res images with a generator function (code snippet linked below), but it gives an error.
loading images separately
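Not the repository's actual dataloader, but one hedged way to build paired LR/HR batches from two directories with tf.data. The directory names, file extension, and [-1, 1] scaling here are assumptions:

import tensorflow as tf

def load_pair(lr_path, hr_path):
    def _read(path):
        img = tf.image.decode_png(tf.io.read_file(path), channels=3)
        return tf.cast(img, tf.float32) / 127.5 - 1.0  # scale to [-1, 1]
    return _read(lr_path), _read(hr_path)

# Sorted globs so the i-th low-res file pairs with the i-th high-res file.
lr_files = sorted(tf.io.gfile.glob('low_res_dir/*.png'))
hr_files = sorted(tf.io.gfile.glob('high_res_dir/*.png'))

dataset = (tf.data.Dataset.from_tensor_slices((lr_files, hr_files))
           .map(load_pair, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(14)
           .prefetch(1))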

More About Trained Model

Hi. Thanks for this nice work.

I need some details about the pretrained model. Could you give some detail about the generator.h5 file?
How many images did you use? Which pre-processing operations were applied?
How long does it take to train?
Is this the final version? Can we use it directly in our projects?

Incompatible shape

Good morning,
thank you for the great work.
I am trying to run the code on my own image.
To change the input size I ran the code you suggested:

from tensorflow import keras

# Load the model
model = keras.models.load_model('models/generator.h5')

# Define arbitrary spatial dims, and 3 channels.
inputs = keras.Input((None, None, 3))

# Trace out the graph using the input:
outputs = model(inputs)

# Override the model:
model = keras.models.Model(inputs, outputs)

# Now you are free to predict on images of any size.

However, I still get this error:
Input 0 of layer "model_2" is incompatible with the layer: expected shape=(None, 96, 96, 3), found shape=(None, 11, 11, 3)

The same thing happens if I run the infer.py command.
Any suggestions on how to solve it?

thank you very much

how to process sr in real time speed for developing video cam

I extracted a 30-second video into frames at 60 fps (480x360), then ran infer.py to get super-resolution frames and recombined them into a video. The conversion process is not that slow, but the inference really takes a long time, nearly an hour. I am aiming to develop real-time super resolution for a webcam, so I need the inference time to be as fast as the video duration (30 seconds) or faster. What are your suggestions?

ValueError: Input 0 is incompatible with layer model_2: expected shape=(None, 96, 96, 3), found shape=(None, 480, 640, 3) from infer.py

First, thank you for your great work.
I have tried to enhance my 640x480 images using the pretrained model, but infer.py returns an error:

ValueError: Input 0 is incompatible with layer model_2: expected shape=(None, 96, 96, 3), found shape=(None, 480, 640, 3)

The model works only if I change the input image to 384x384, which is the original value used when training the model.

I've read 'Changing Input Size' in the README, but the modification already seems to be applied in infer.py.

Also, when I load the pretrained model, there is a warning:

WARNING:tensorflow:No training configuration found in the save file, so the model was not compiled. Compile it manually.

I'm using the latest version of tensorflow-gpu : 2.4.0.

Training parameters

Hi, thanks for the great work. I was just wondering: what training parameters did you use? How many epochs, and what batch size?

Models for higher resolution

I would highly appreciate it if you could provide a model for higher resolutions; I need it for some projects.

Thanks,
Steve

Question about training params

Hi, Thank you very much for your great work.

I tried to retrain the model using DIV2K data. However, after running about 100 epochs, I still see checkerboard artifacts in the generated images. Could you provide the training parameters of the pretrained model?

Thanks

ValueError: Exception encountered when calling layer "model" (type Functional)

Trying to run this model on Google Colab with a general-sized image (i.e., not 96x96), I hit the above error. I implemented the suggested code block to generalise to arbitrarily sized images and ran the suggested code to apply the model. The traceback is:

Traceback (most recent call last):
  File "infer.py", line 50, in <module>
    main()
  File "infer.py", line 37, in main
    sr = model.predict(np.expand_dims(low_res, axis=0))[0]
  File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_filealp9pr7w.py", line 15, in tf__predict_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
ValueError: in user code:

    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1845, in predict_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1834, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1823, in run_step  **
        outputs = model.predict_step(data)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1791, in predict_step
        return self(x, training=False)
    File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
        raise ValueError(f'Input {input_index} of layer "{layer_name}" is '

    ValueError: Exception encountered when calling layer "model" (type Functional).
    
    Input 0 of layer "model_2" is incompatible with the layer: expected shape=(None, 96, 96, 3), found shape=(None, 1600, 1178, 3)
    
    Call arguments received by layer "model" (type Functional):
      • inputs=tf.Tensor(shape=(None, 1600, 1178, 3), dtype=float32)
      • training=False
      • mask=None

Easiest way to run inference from C/C++

I have images in memory in C++ that I would like to upscale back into memory. Ideally the whole environment for doing this would also be easy to distribute.

From what I've found, it seems like I could convert the model to Tensorflow .pb and then run inference in openCV C++.

Does that sound like the simplest approach? Wondering if anybody else did this already.

Thanks for the help!
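For the Python-side conversion, here is a hedged sketch of freezing the Keras generator to a .pb file. It uses a TensorFlow 2.x internal helper that does exist at this import path; whether OpenCV's DNN module supports every layer in this particular model is not verified here:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.models.load_model('models/generator.h5', compile=False)

# Wrap the model in a tf.function with a fixed input signature, then freeze
# its variables into constants so the graph is self-contained.
fn = tf.function(lambda x: model(x))
fn = fn.get_concrete_function(tf.TensorSpec([1, 384, 384, 3], tf.float32))
frozen = convert_variables_to_constants_v2(fn)

tf.io.write_graph(frozen.graph, 'models', 'generator.pb', as_text=False)

# OpenCV (Python shown; the C++ API mirrors it) can then attempt:
# net = cv2.dnn.readNetFromTensorflow('models/generator.pb')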

Loading trained checkpoints

Hi Hasnain, thank you for the great work. I'm training the network with a larger dataset, and your code saves models every two hundred iterations. When training ends, should I load the latest generator checkpoint file, or should I do something different?

Training for Higher Upsampling Levels

I have made some changes to the code following this issue's advice:
#14

However, when removing or adding upsampling layers to the model and attempting to train a new model from scratch, the following error is shown:
ValueError: Dimensions must be equal, but are 384 and 768 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](functional_5/conv2d_12/Tanh, y)' with input shapes: [7,384,384,3], [7,768,768,3]

This indicates that the sizes and ratios are hardcoded elsewhere. Could you point out which other parameters need to be updated? If so, I'd be happy to provide a pull request with the changes needed for arbitrary scaling factors once it's done.
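The mismatch above suggests the dataloader's fixed 4x downsampling no longer matches the generator's new output scale. A hedged sketch of keeping the two in sync (the names and the bicubic downsampling are illustrative, not the repo's actual parameters):

import tensorflow as tf

n_upsample_stages = 3        # e.g. three x2 pixel-shuffle stages -> 8x overall
scale = 2 ** n_upsample_stages
hr_size = 384
lr_size = hr_size // scale   # the input size the generator must receive

def make_pair(hr_image):
    # Downsample each HR crop by `scale` so generator(lr) matches hr exactly.
    lr_image = tf.image.resize(hr_image, [lr_size, lr_size], method='bicubic')
    return lr_image, hr_image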

Requirements

Hello,
Can you write the requirements for this project?
I'm trying to run it but get this error:

Traceback (most recent call last):
  File ".../infer.py", line 58, in <module>
    main()
  File ".../infer.py", line 27, in main
    model = keras.models.load_model('models/generator.h5')
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 137, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 162, in load_model_from_hdf5
    custom_objects=custom_objects)
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py", line 90, in deserialize
    printable_module_name='layer')
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
    list(custom_objects.items())))
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1123, in from_config
    process_layer(layer_data)
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1107, in process_layer
    layer = deserialize_layer(layer_data, custom_objects=custom_objects)
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py", line 90, in deserialize
    printable_module_name='layer')
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 194, in deserialize_keras_object
    return cls.from_config(cls_config)
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 451, in from_config
    return cls(**config)
  File ".../penv/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 2417, in __init__
    self.node_def = node_def_pb2.NodeDef.FromString(node_def)
TypeError: a bytes-like object is required, not 'dict'

tensorflow-gpu - 2.0.0b1
Keras - 2.3.1

main.py: error: unrecognized arguments: --hr_image_size 384

Hey, thanks for sharing. Is it possible to run this in a Colab environment? I get this error when I try to run the sample script. I cloned the repo, cd'd into the directory, and placed and pointed to my own image directory within.

I get this error when I run it on 14 images I resized to 384x384:

thanks again

!python /content/Fast-SRGAN/main.py --image_dir '/content/Fast-SRGAN/imagedir' --hr_image_size 384 --lr 1e-4 --save_iter 200 --epochs 10 --batch_size 14

usage: main.py [-h] [--image_dir IMAGE_DIR] [--batch_size BATCH_SIZE]
               [--epochs EPOCHS] [--hr_size HR_SIZE] [--lr LR]
               [--save_iter SAVE_ITER]
main.py: error: unrecognized arguments: --hr_image_size 384

pre-trained model D (Discriminator) not found

Hello Hasnain, good morning!

First of all, congratulations on the project. I have a question about the .h5 file for the pretrained discriminator (D), which I did not find in the models folder. Is there a reason for shipping only the pretrained generator (G)?
From what I understand, the GAN only needs the pretrained G model at test time. Is there any particularity to this type of operation?

Thanks!

Way to make 2x instead of 4x?

Hi there,

I'm trying to figure out whether there's any way to make it output 2x images instead of 4x, to save even more time.

Thanks for your awesome work.
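Not confirmed against this repository's code, but since a 4x factor typically comes from two x2 pixel-shuffle stages, a 2x variant would keep only one such stage (and would need retraining). An illustrative sketch of a single stage:

import tensorflow as tf
from tensorflow.keras import layers

def upsample_2x(x, filters=32):
    # One sub-pixel stage: conv to 4x the channels, then shuffle depth to space.
    x = layers.Conv2D(filters * 4, 3, padding='same')(x)
    x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2))(x)
    return layers.PReLU(shared_axes=[1, 2])(x)

# A 4x generator stacks two of these; applying it once yields 2x output.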

VGG19 for grayscale images

Dear Hasnain,
Thanks for a well-developed SRGAN codebase.
I want to use it for grayscale images, but the problem is the content loss: VGG19 is trained on ImageNet, which is not grayscale, and it requires the SR and HR images to have 3 colour channels when used in SRGAN. Is there a way around this? Maybe I could use VGG19 but first train it on my own grayscale images. Any help would be appreciated. I would also like to use SRGAN for 3D images in the near future; any help in that regard would be appreciated too.
Cheers,
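One common workaround (an assumption, not this repo's approach) is to tile the single channel to three before the VGG feature extraction. The layer choice is illustrative and VGG input preprocessing is omitted for brevity:

import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
feature_extractor = tf.keras.Model(vgg.input,
                                   vgg.get_layer('block5_conv4').output)

def content_loss(sr_gray, hr_gray):
    # Repeat the grayscale channel so the ImageNet-trained VGG accepts it.
    sr_rgb = tf.image.grayscale_to_rgb(sr_gray)
    hr_rgb = tf.image.grayscale_to_rgb(hr_gray)
    return tf.reduce_mean(tf.square(feature_extractor(sr_rgb)
                                    - feature_extractor(hr_rgb)))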

OOM Error

I get OOM errors when an input image is bigger than about 1200 pixels per side (this varies by image for some reason). Can you help me understand why the model breaks here? Is it the shape of the model or some other issue, and can it be configured around?

Thanks!

This is the error:

2019-12-03 10:39:21.705187: W tensorflow/core/common_runtime/bfc_allocator.cc:424] *_________________****************************__________***************************_________________
2019-12-03 10:39:21.705596: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at transpose_op.cc:198 : Resource exhausted: OOM when allocating tensor with shape[1,4800,4800,32] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "infer.py", line 50, in <module>
    main()
  File "infer.py", line 37, in main
    sr = model.predict(np.expand_dims(low_res, axis=0))[0]
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 908, in predict
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 723, in predict
    callbacks=callbacks)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 394, in model_iteration
    batch_outs = f(ins_batch)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/backend.py", line 3476, in __call__
    run_metadata=self.run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1472, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1,4800,4800,32] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node model_2/p_re_lu_2/Relu_1-0-0-TransposeNCHWToNHWC-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[model_2/conv2d_13/Tanh/_743]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[1,4800,4800,32] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node model_2/p_re_lu_2/Relu_1-0-0-TransposeNCHWToNHWC-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.
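A common workaround (not from the author) is tiled inference: the [1, 4800, 4800, 32] activation in the log above is what exhausts GPU memory, so splitting the input into tiles bounds the peak allocation. A simplified sketch that assumes the image dimensions are multiples of the tile size and ignores seam artifacts (production code would overlap and blend tiles):

import numpy as np

def upscale_tiled(model, low_res, tile=384, scale=4):
    # Run the generator tile-by-tile to bound peak activation memory.
    h, w, c = low_res.shape
    out = np.zeros((h * scale, w * scale, c), dtype=np.float32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = low_res[y:y + tile, x:x + tile]
            sr = model.predict(np.expand_dims(patch, 0))[0]
            out[y * scale:(y + tile) * scale,
                x * scale:(x + tile) * scale] = sr
    return out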

Image was white until I made this change

Hello, just letting you know I needed to make this change to the inference code in order for the output to display properly. I upgraded a few of the dependencies to the latest versions, which may be why; or possibly it's because I converted infer.py to use live webcam images. I am using:

Tensorflow 2.3.1
Python 3.8
OpenCV 4.40
Numpy 1.18.5

I needed to make one small change to the inference code, otherwise the image was just all white:

# Rescale values in range 0-255
sr = ((sr + 1.0) / 2.0) * 255.0
sr = sr.astype(np.uint8)  # image is all white without this line

# Convert back to BGR for opencv
sr = cv2.cvtColor(sr, cv2.COLOR_RGB2BGR)
