
dutcode's People

Contributors

annbless


dutcode's Issues

Issue in loading the DIFNet model

Traceback (most recent call last):
File "./scripts/DIFRINTStabilizer.py", line 105, in
fhat, I_int = DIFNet(fr_g1, fr_g3, fr_o2,
File "/home/nikesh/stabl/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", l
return forward_call(*input, **kwargs)
File "/home/nikesh/stabl/venv/lib/python3.8/site-packages/torch/nn/parallel/data_paralle
return self.module(*inputs[0], **kwargs[0])
File "/home/nikesh/stabl/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", l
return forward_call(*input, **kwargs)
File "/home/nikesh/stabl/DUTCode/models/DIFRINT/models.py", line 323, in forward
w1, flo1 = self.warpFrame(fs2, fr1, scale=scale)
File "/home/nikesh/stabl/DUTCode/models/DIFRINT/models.py", line 318, in warpFrame
flo = 20.0 * torch.nn.functional.interpolate(input=self.pwc(temp_fr_1, temp_fr_2), siz
File "/home/nikesh/stabl/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", l
return forward_call(*input, **kwargs)
File "/home/nikesh/stabl/DUTCode/models/DIFRINT/pwcNet.py", line 269, in forward
objectEstimate = self.moduleSix(tensorFirst[-1], tensorSecond[-1], None)
File "/home/nikesh/stabl/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", l
return forward_call(*input, **kwargs)
File "/home/nikesh/stabl/DUTCode/models/DIFRINT/pwcNet.py", line 197, in forward
tensorVolume = self.moduleCorreleaky(self.moduleCorrelation(tensorFirst, tensorSecond)
File "/home/nikesh/stabl/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", l
return forward_call(*input, **kwargs)
File "/home/nikesh/stabl/DUTCode/models/correlation/correlation.py", line 395, in forwar
return _FunctionCorrelation.apply(tenFirst, tenSecond)
File "/home/nikesh/stabl/DUTCode/models/correlation/correlation.py", line 286, in forwar
assert(first.is_contiguous() == True)
AssertionError
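
The assertion in models/correlation/correlation.py fires because the tensors reaching the custom correlation op are not contiguous in memory. One possible workaround (my own sketch, not code from the repository) is to force contiguity right where the op is applied, i.e. change the return at line 395 of correlation.py to:

return _FunctionCorrelation.apply(tenFirst.contiguous(), tenSecond.contiguous())

This only masks the symptom; the inputs most likely lost contiguity in an upstream slicing or permute step, so fixing that call site instead would be cleaner.
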
Stabiling using the DIFRINT model

Traceback (most recent call last):
File "./scripts/StabNetStabilizer.py", line 36, in
model.load_state_dict(r_model)
File "/home/nikesh/stabl/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", l
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for stabNet:
Missing key(s) in state_dict: "resnet50.resnet_v2_50_block1_unit_1_bottleneck_v2_preact_FusedBatchNorm.running_var", ... (a long list of running_mean / running_var keys for the resnet_v2_50 block1-block4 BatchNorm layers, garbled in this capture) ..., "resnet50.resnet_v2_50_postnorm_FusedBatchNorm.running_var", "resnet50.resnet_v2_50_

cupy 7.5.0 version

Problem with cupy 7.5.0 installation
This is the error log:

ERROR: Could not find a version that satisfies the requirement cupy==7.5.0
ERROR: No matching distribution found for cupy==7.5.0

Can anybody help?
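
For what it's worth (these are general CuPy packaging behaviors, not something confirmed by the repo authors): the plain cupy==7.5.0 package is a source distribution that must compile against a locally installed CUDA toolkit, and the 7.x series only supports older Python versions, so pip often reports "no matching distribution". If a prebuilt wheel is acceptable, the CUDA-specific package for your CUDA version may work instead, for example (assuming CUDA 10.1; adjust the suffix to your CUDA version):

pip install cupy-cuda101==7.5.0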

[REQ] add a (GH-compliant) license file

Hi there, first of all thanks for this awesome work!

Since we've 'doxed' it in our HyMPS project (under VIDEO section \ AI-based page \ Stabilizers), can you please add a GH-compliant license file for it?

As you know, making licensing terms explicit is extremely important so that anyone can understand more quickly how to reuse/adapt/modify the sources (and not only those) in other open projects, and vice versa.

Although it may sound like a minor aspect, omitting the license file also prevents the related badge from being generated consistently:


(badge-generator URL: https://badgen.net/github/license/Annbless/DUTCode)

You can easily set a standardized one through GitHub's license wizard tool.

Last but not least, let us know how we could improve - in your opinion - our categorizations and links to resources in order to favor collaboration between developers (and therefore evolution) of listed projects.

Hope that helps/inspires !

Infinite running time

Hi,

I tried to run inference with your model following your instructions, but I got stuck at

Directory exists
Stabiling using the DUT model

Namespace(InputBasePath='images/', MaxLength=1200, MotionProPath='ckpt/MotionPro.pth', OutNamePrefix='', OutputBasePath='results/', PWCNetPath='ckpt/network-default.pytorch', RFDetPath='ckpt/RFDet_640.pth.tar', Repeat=50, SingleHomo=False, SmootherPath='ckpt/smoother.pth')
-------------model configuration------------------------
using RFNet ...
using PWCNet for motion estimation...
using Motion Propagation model with multi homo...
using Deep Smoother Model...
------------------reload parameters-------------------------
reload Smoother params
successfully load 12 params for smoother
reload RFDet Model

any suggestions?

The pretrained model archive is corrupted

The pretrained model archive is corrupted. Specifically, the RFDet_640.pth.tar file in the ckpt folder cannot be opened. I want to ask whether this is normal. Finally, can you provide a download link for your dataset?

Where is the supplementary video?

When I click the link in the picture to view the supplementary video, the website cannot be found. Could anyone help me? Thanks a lot!
(screenshots attached)

training code

Dear Annbless
Thanks for your great work on video stabilization! The results are amazing!
Could you release the training code for this project?
Thank you for an early answer!

Training detail for Smooth kernel

Thanks for your awesome paper and code!

I am trying to rewrite the training code, but I have some questions due to my limited understanding.

  1. In "def generateSmooth"

    Why is temp_smooth computed with both kernel and abskernel? In the paper there is no absolute value applied to the kernel.

  2. In my training runs, the kernel collapses to zero and thus Lts trivially converges.

    Are there any constraints on the kernel during training?

Code is unusable due to missing models (sintel.pytorch and stabNet.pth)

So far, I have not seen in the comments posted here that anyone has successfully run this code through to the end. I was able to make headway with an A100 40GB GPU and 600 2K images; however, the code fails towards the end due to missing models. It is looking for both sintel.pytorch and stabNet.pth in the ckpt folder, neither of which is included via your download link. Please let us know where to find these models, or whether you would be able to update your archive to include them. Without them, it is not possible to run this code.

Something is wrong

Hello,

Thanks for your great work on video stabilization!

When I run the code using colab, I get the following error. How can I solve it?

(screenshot of the error attached)

Looking forward to your reply, thanks!

Google Colab

Hello, I started ‘Run the three models on the provided unstable sequences and see the results’, and the following errors occurred. How can I solve them?

`From https://github.com/Annbless/DUTCode

  • branch main -> FETCH_HEAD
    Already up to date.
    Stabiling using the DUT model

Traceback (most recent call last):
File "/content/DUTCode/models/DUT/PWCNet.py", line 14, in
from .correlation import correlation # the custom cost volume layer
ModuleNotFoundError: No module named 'models.DUT.correlation'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "./scripts/DUTStabilizer.py", line 11, in
from models.DUT.DUT import DUT
File "/content/DUTCode/models/DUT/DUT.py", line 12, in
from .PWCNet import Network as PWCNet
File "/content/DUTCode/models/DUT/PWCNet.py", line 19, in
import correlation # you should consider upgrading python
File "/content/DUTCode/models/correlation/correlation.py", line 273, in
@cupy.util.memoize(for_each_device=True)
File "/usr/local/lib/python3.7/dist-packages/cupy/init.py", line 875, in getattr
f"module 'cupy' has no attribute {name!r}")
AttributeError: module 'cupy' has no attribute 'util'
Stabiling using the DIFRINT model

Traceback (most recent call last):
File "/content/DUTCode/models/DIFRINT/pwcNet.py", line 17, in
from models.correlation import correlation # the custom cost volume layer
File "/content/DUTCode/models/correlation/correlation.py", line 273, in
@cupy.util.memoize(for_each_device=True)
File "/usr/local/lib/python3.7/dist-packages/cupy/init.py", line 875, in getattr
f"module 'cupy' has no attribute {name!r}")
AttributeError: module 'cupy' has no attribute 'util'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "./scripts/DIFRINTStabilizer.py", line 11, in
from models.DIFRINT.models import DIFNet2
File "/content/DUTCode/models/DIFRINT/models.py", line 6, in
from .pwcNet import PwcNet
File "/content/DUTCode/models/DIFRINT/pwcNet.py", line 22, in
import correlation # you should consider upgrading python
File "/content/DUTCode/models/correlation/correlation.py", line 273, in
@cupy.util.memoize(for_each_device=True)
File "/usr/local/lib/python3.7/dist-packages/cupy/init.py", line 875, in getattr
f"module 'cupy' has no attribute {name!r}")
AttributeError: module 'cupy' has no attribute 'util'
Stabiling using the DIFRINT model

inference with [1, 2, 4, 8, 16, 32]
totally 480 frames for stabilization
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
length: 10
fps=12.950710894327822
length: 20
fps=15.40031679614381
length: 30
fps=16.42688958201805
length: 40
fps=17.125610864624488
length: 50
fps=17.591498310390687
length: 60
fps=17.907956614992536
length: 70
fps=18.141366974465086
length: 80
fps=18.295022273905573
length: 90
fps=18.413580126169684
length: 100
fps=18.530250318733433
length: 110
fps=18.601868551162436
length: 120
fps=18.713303068323516
length: 130
fps=18.773824996850426
length: 140
fps=18.827789863414793
length: 150
fps=18.84869852294256
length: 160
fps=18.894238953017897
length: 170
fps=18.94839989998453
length: 180
fps=18.98515587712094
length: 190
fps=19.032914027050115
length: 200
fps=19.066544372029053
length: 210
fps=19.109932728253188
length: 220
fps=19.125262363111936
length: 230
fps=19.12811980772301
length: 240
fps=19.142416627959022
length: 250
fps=19.16557507573052
length: 260
fps=19.157523566592534
length: 270
fps=19.17786571572879
length: 280
fps=19.179405902201264
length: 290
fps=19.191366861751973
length: 300
fps=19.1906491851741
length: 310
fps=19.179568576982227
length: 320
fps=19.195247026502646
length: 330
fps=19.208179441868264
length: 340
fps=19.215569600713355
length: 350
fps=19.226686902273496
length: 360
fps=19.232920607360814
length: 370
fps=19.236069136892027
length: 380
fps=19.24671430176637
length: 390
fps=19.257166548749606
length: 400
fps=19.264782372366604
length: 410
fps=19.27072526922419
length: 420
fps=19.26644721940641
length: 430
fps=19.27420648287631
length: 440
fps=19.280633224313547
length: 450
fps=19.292573304446215
length: 460
fps=19.302845257515294
length: 470
fps=19.305520153209212
total length=480
0
0
10
0
20
0
30
0
40
0
50
45792
60
70452
70
70452
80
70452
90
70452
100
70452
110
70452
120
70452
130
70452
140
70452
OpenCV: FFMPEG: tag 0x5634504d/'MP4V' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'`
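
The AttributeError for both DUT and DIFRINT comes from the installed CuPy version: cupy.util.memoize no longer exists in recent CuPy releases, while models/correlation/correlation.py still uses the old name. Two possible fixes, neither of which comes from the repository itself: pin cupy back to the 7.x version listed in the requirements, or edit the decorator at line 273 of correlation.py to use the current API (cupy.memoize accepts the same for_each_device argument):

@cupy.memoize(for_each_device=True)  # was: @cupy.util.memoize(for_each_device=True)

The later OpenCV lines about tag 'MP4V' falling back to 'mp4v' are only a codec-tag warning, not an error.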

nice work, but one issue

Traceback (most recent call last):
File "./scripts/DUTStabilizer.py", line 111, in
generateStable(model, inPath, outPath, outPrefix, maxlength, args)
File "./scripts/DUTStabilizer.py", line 57, in generateStable
origin_motion, smoothPath = model.inference(x.cuda(), x_RGB.cuda(), repeat=args.Repeat)
File "/home/gary/vs/deeplearn/DUTCode/models/DUT/DUT.py", line 273, in inference
for i in range(len(kpts) - 1)]
File "/home/gary/vs/deeplearn/DUTCode/models/DUT/DUT.py", line 273, in
for i in range(len(kpts) - 1)]
File "/home/gary/vs/deeplearn/DUTCode/models/DUT/MotionPro.py", line 105, in inference
motion, gridsMotion, _ = self.homoEstimate(concat_motion, kp)
File "/home/gary/vs/deeplearn/DUTCode/utils/ProjectionUtils.py", line 170, in multiHomoEstimate
pred_Y = KMeans(n_clusters=2, random_state=2).fit_predict(motion_numpy)
File "/home/gary/anaconda3/envs/DUTCode/lib/python3.6/site-packages/sklearn/cluster/kmeans.py", line 1122, in fit_predict
return self.fit(X, sample_weight=sample_weight).labels

File "/home/gary/anaconda3/envs/DUTCode/lib/python3.6/site-packages/sklearn/cluster/_kmeans.py", line 1033, in fit
accept_large_sparse=False)
File "/home/gary/anaconda3/envs/DUTCode/lib/python3.6/site-packages/sklearn/base.py", line 420, in _validate_data
X = check_array(X, **check_params)
File "/home/gary/anaconda3/envs/DUTCode/lib/python3.6/site-packages/sklearn/utils/validation.py", line 72, in inner_f
return f(**kwargs)
File "/home/gary/anaconda3/envs/DUTCode/lib/python3.6/site-packages/sklearn/utils/validation.py", line 645, in check_array
allow_nan=force_all_finite == 'allow-nan')
File "/home/gary/anaconda3/envs/DUTCode/lib/python3.6/site-packages/sklearn/utils/validation.py", line 99, in _assert_all_finite
msg_dtype if msg_dtype is not None else X.dtype)
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

I use default dataset in folder images, thanks~
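
The crash itself happens because some of the motion values handed to KMeans are NaN or infinite; the real question is why the estimated motion blows up upstream, but a quick guard is to drop non-finite rows before clustering. A minimal sketch (assuming motion_numpy is the 2-D array passed to fit_predict in utils/ProjectionUtils.py), not the repository's actual fix:

import numpy as np
from sklearn.cluster import KMeans

finite_rows = np.isfinite(motion_numpy).all(axis=1)  # keep only rows without NaN/Inf
pred_Y = KMeans(n_clusters=2, random_state=2).fit_predict(motion_numpy[finite_rows])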

Training code and metrics calculation code.

Dear Annbless
Thanks for your great work on video stabilization!
I greatly respect your work.
Do you have any plans to release the training code and the metrics calculation code for this project in the near future?
Thank you for an early answer!

Can't install cupy

I can't install the cupy version specified. I tried many other versions but ran into other problems later on. Please help me out.

key points extraction in textureless region

Excellent job, but I have a question. When there is a large area of sky in the picture with little texture, can the CNN-based keypoint detection model still effectively extract keypoints?

full training code and data

Dear Xu, may I ask when the complete training code you mentioned earlier will be released? I am eager to learn about it, please!

Error on some videos

Stabiling using the DUT model

Namespace(InputBasePath='images/', MaxLength=1200, MotionProPath='ckpt/MotionPro.pth', OutNamePrefix='', OutputBasePath='results/', PWCNetPath='ckpt/network-default.pytorch', RFDetPath='ckpt/RFDet_640.pth.tar', Repeat=50, SingleHomo=False, SmootherPath='ckpt/smoother.pth')
-------------model configuration------------------------
using RFNet ...
using PWCNet for motion estimation...
using Motion Propagation model with multi homo...
using Deep Smoother Model...
------------------reload parameters-------------------------
reload Smoother params
successfully load 12 params for smoother
reload RFDet Model
successfully load 100 params for RFDet
reload PWCNet Model
reload MotionPropagation Model
successfully load 21 params for MotionPropagation
Traceback (most recent call last):
File "./scripts/DUTStabilizer.py", line 111, in
generateStable(model, inPath, outPath, outPrefix, maxlength, args)
File "./scripts/DUTStabilizer.py", line 42, in generateStable
image = image * (1. / 255.)
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
Stabiling using the DIFRINT model

Namespace(InputBasePath='images/', OutputBasePath='results/', cuda=True, desiredHeight=480, desiredWidth=640, modelPath='ckpt/DIFNet2.pth', n_iter=3, skip=2, temp_file='./DIFRINT_TEMP/')

Iter: 1
C:\Users\Milan\miniconda3\envs\dutcode\lib\site-packages\torch\nn\functional.py:2705: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
Frame: 10/4673

The DUT stabilizer is not working, while DIFRINT works fine.
I am on Windows 10 in a miniconda environment; these videos are not at the classic resolution, they are smaller than 640x480 px.
Please tell me where I am going wrong?
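
The TypeError in generateStable means cv2.imread returned None for one of the input frames, i.e. OpenCV could not read the file at all (bad path, an unexpected file in the input folder, or an unreadable image), which is a different problem from the frame resolution. A small guard, as a sketch only (frame_path is a hypothetical name, not the script's variable):

import cv2

image = cv2.imread(frame_path)
if image is None:
    raise IOError('cv2.imread could not read {}'.format(frame_path))
image = image * (1. / 255.)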

How to run inference on video with smaller resolution

Hello,

Thanks for your great work on video stabilization!

How can I run inference on video with a smaller resolution, such as 114*180? I modified WIDTH and HEIGHT in the config file, but when I tried it on my data, I got an error like ValueError: cannot reshape array of size 216 into shape (9, 11, 2). It seems something is wrong in MotionPro.py.

Looking forward to your reply, thanks!
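
A guess at the cause: for the smaller frames the mesh-motion array ends up with 216 values while the reshape still expects a 9 x 11 x 2 = 198-value grid, so the grid dimensions and the configured WIDTH/HEIGHT have gone out of sync somewhere. I do not know the repository's actual variable names, but a consistency check along these lines can at least localize the mismatch (motion, grid_h and grid_w are placeholders):

import numpy as np

def reshape_mesh_motion(motion, grid_h, grid_w):
    # the flat motion array must hold exactly grid_h * grid_w * 2 values (an x/y pair per mesh vertex)
    expected = grid_h * grid_w * 2
    if motion.size != expected:
        raise ValueError('got %d motion values but the mesh expects %d (%d x %d x 2); '
                         'WIDTH/HEIGHT in the config are probably inconsistent with the mesh resolution'
                         % (motion.size, expected, grid_h, grid_w))
    return np.reshape(motion, (grid_h, grid_w, 2))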
