easymocap's People

Contributors

baojunshan, carlosedubarreto, chingswy, gcgeng, jfzhang95, lilien86, linyu0219, notyetterminal, peabody124, pengsida, raypine, rdenubila, sanggusti, tejaswivg, woo1, xk-huang, yqtl, yzhang-gh

easymocap's Issues

The up vector for the world coordinate, the orientation of cameras, and the fitted SMPL parameters.

After successfully running the Quick Start code on the example dataset, I am trying to render the same views with fitted SMPL parameters and camera matrices using PyTorch3D. However, I encounter some problems with the transformation between coordinates.

First, I use the SMPLLayer from this repo to generate an SMPL mesh from the Rh, Th, Pose, and Shape parameters. This process rotates the mesh so that it heads up in the positive-Z direction instead of the positive-Y direction of the original SMPL coordinate system. Is this related to the up vector of the world coordinate system? And what is this world coordinate system's configuration (up vector, ground plane)?

Second, in this repo you render SMPL with PyRender, but I have difficulty rendering the same view with PyTorch3D. According to your code, you first rotate all the vertices by 180 degrees around the X-axis, then transform them with the inverse of the camera's RT matrix. What is the goal of this rotation, and why not feed the extrinsic parameters into the camera directly?

Thanks in advance!
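For anyone reproducing this in PyTorch3D: the 180-degree X rotation is usually just the flip between OpenCV-style extrinsics (+Z forward, +Y down) and OpenGL-style cameras such as PyRender's (-Z forward, +Y up). A minimal numpy sketch with made-up values (not the repo's actual code):

```python
import numpy as np

# Made-up world-to-camera extrinsics (x_cam = R @ x_world + T):
R = np.eye(3)
T = np.array([[0.0], [0.0], [3.0]])

# 180-degree rotation about X; converts OpenCV camera space
# (+Z forward, +Y down) into OpenGL camera space (-Z forward, +Y up).
flip_x = np.diag([1.0, -1.0, -1.0])

verts_world = np.array([[0.0, 0.0, 1.0]])      # one world-space vertex
verts_cam = (R @ verts_world.T + T).T          # OpenCV-style camera space
verts_gl = (flip_x @ verts_cam.T).T            # OpenGL-style camera space

# Equivalently, fold the flip into the extrinsics once:
R_gl = flip_x @ R
T_gl = flip_x @ T
verts_gl2 = (R_gl @ verts_world.T + T_gl).T
```

So instead of rotating the vertices, you can hand the renderer the pre-flipped R_gl, T_gl. PyTorch3D uses yet another convention (+X left, +Y up, +Z forward), so a similar per-axis sign flip applies there.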

How to transform smpl from the world coordinate to the smpl coordinate?

Hi,

Thanks for the great work! I have a question about how to get params['Rh'] and params['Th'] when I want to transform SMPL from the world coordinate system to the SMPL coordinate system. I have the camera intrinsic and extrinsic parameters and the original SMPL parameters (pose, betas), and I want to get the new global rotation and translation.
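Not EasyMocap's actual code, but composing a world-to-camera transform with SMPL's global rotation/translation can be sketched like this (all values are made up, and this ignores the pelvis-joint offset that SMPL applies before Th, which matters for exact alignment):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical inputs: Rh is SMPL's global rotation (axis-angle) and
# Th its global translation, both expressed in the world frame.
Rh_world = np.array([0.1, 0.2, 0.3])
Th_world = np.array([0.5, 0.0, 1.0])
R_cam = Rotation.from_euler('y', 30, degrees=True).as_matrix()  # world -> cam
T_cam = np.array([0.0, 0.0, 3.0])

# Compose: x_cam = R_cam @ (R_smpl @ x + Th_world) + T_cam
R_smpl = Rotation.from_rotvec(Rh_world).as_matrix()
Rh_cam = Rotation.from_matrix(R_cam @ R_smpl).as_rotvec()
Th_cam = R_cam @ Th_world + T_cam
```

The same composition works in the other direction with the inverted extrinsics (R_cam.T, -R_cam.T @ T_cam).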

Can it work with 2 or 3 cameras?

I'm trying to understand how to calibrate the cameras, but I don't have a way to record with 4 cameras.
Is it possible to make it work properly with 2 or 3 cameras only?

Thanks a lot; by far this solution seems the best one I've seen.
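On the minimum-camera question: triangulation only needs two calibrated views in principle (more views mainly add robustness to occlusion and noise). A self-contained sketch of two-view DLT triangulation, with made-up cameras, shows why two suffice:

```python
import numpy as np

def triangulate_dlt(points_2d, projections):
    """Triangulate one 3D point from N >= 2 views via the DLT.

    points_2d:   list of (x, y) pixel observations
    projections: list of 3x4 camera projection matrices
    """
    A = []
    for (x, y), P in zip(points_2d, projections):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Two made-up cameras observing the same point:
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])
obs = []
for P in (P1, P2):
    p = P @ np.append(X_true, 1.0)
    obs.append(p[:2] / p[2])
print(triangulate_dlt(obs, [P1, P2]))  # recovers ~ [0.2, 0.1, 2.0]
```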

Whole body 3d keypoints

Thanks for this pre-release of wonderful work. We are eagerly waiting for whole-body keypoints and SMPL support. Is there any expected date for the next commit?

Camera extrinsic parameters

Thank you for your perfect work. I have a question about the extri.yml of the provided example multiview dataset. In the provided extri.yml, what is the difference between R_1 and Rot_1? If I prepare my own dataset, should I put the camera's rotation vector in Rot_1? Also, how do I obtain R_1?

Another question: if I have obtained the SMPL parameters in JSON, how can I obtain the mesh?

Thank you very much.

Visualizer with SMPL-X model possible?

Hello,
after trying different options inside o3d_scene.yml I could only get the skeleton model to run. I assume I must change some arguments for:

body_model:
  module: "easymocap.visualize.skelmodel.SkelModel"
  args:
    body_type: "body25"   <- bodyhand? (but it's not working; I was checking config.py for this, but I guess I made some mistakes)
    joint_radius: 0.02
    gender: "neutral"
    model_type: "smpl"    <- smplh or smplx? (not working either)

On your examples page there is an animated full-body mesh (SMPL-X?) shown, so I guess it's possible to visualize a whole body, right?
Thanks again ...

reconstruction issue

#25 (comment)
I have made changes according to the comments in mv1p and body_model.
Now it is showing this error: shutil not defined, in shutil.copy.

Errors on Linux (Ubuntu) when running quickstart example

First of all, thank you very much for this great repo! When trying the quick-start example I'm facing some errors where I'd really appreciate some help:

  1. After processing the videos:

Traceback (most recent call last):
File "scripts/preprocess/extract_video.py", line 267, in
join(args.path, 'openpose_render', sub), args)
File "scripts/preprocess/extract_video.py", line 56, in extract_2d
os.chdir(openpose)
FileNotFoundError: [Errno 2] No such file or directory: '/media/qing/Project/openpose'

  2. After triangulation:

-> [Optimize global RT ]: 0.9ms
Traceback (most recent call last):
File "apps/demo/mv1p.py", line 109, in
mv1pmf_smpl(dataset, args)
File "apps/demo/mv1p.py", line 71, in mv1pmf_smpl
weight_shape=weight_shape, weight_pose=weight_pose)
File "/home/frankfurt/dev/EasyMocap/easymocap/pipeline/basic.py", line 77, in smpl_from_keypoints3d2d
params = multi_stage_optimize(body_model, params, kp3ds, kp2ds, bboxes, Pall, weight_pose, cfg)
File "/home/frankfurt/dev/EasyMocap/easymocap/pipeline/basic.py", line 18, in multi_stage_optimize
params = optimizePose3D(body_model, params, kp3ds, weight=weight, cfg=cfg)
File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 301, in optimizePose3D
params = _optimizeSMPL(body_model, params, prepare_funcs, postprocess_funcs, loss_funcs, weight_loss=weight, cfg=cfg)
File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 246, in _optimizeSMPL
final_loss = fitting.run_fitting(optimizer, closure, opt_params)
File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize.py", line 38, in run_fitting
loss = optimizer.step(closure)
File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/lbfgs.py", line 307, in step
orig_loss = closure()
File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 227, in closure
new_params = func(new_params)
File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 121, in interp_func
params[key][nf] = interp(params[key][left], params[key][right], 1-weight, key=key)
IndexError: index 800 is out of bounds for dimension 0 with size 800

Then triangulation runs a second time (with the same error as above), plus this one:

Traceback (most recent call last):
File "apps/demo/mv1p.py", line 108, in
mv1pmf_skel(dataset, check_repro=True, args=args)
File "apps/demo/mv1p.py", line 35, in mv1pmf_skel
images, annots = dataset[nf]
File "/home/frankfurt/dev/EasyMocap/easymocap/dataset/mv1pmf.py", line 73, in getitem
annots = self.select_person(annots_all, index, self.pid)
File "/home/frankfurt/dev/EasyMocap/easymocap/dataset/base.py", line 461, in select_person
keypoints = np.zeros((self.config['nJoints'], 3))
KeyError: 'nJoints'

Thank you very much!

Error when using setup.py

I created a new environment with Python 3.7, then I get an error when using: python setup.py develop --user (cannot use python3).
And I get this one:
image

Error using Mano

When trying MANO, it looks like when the hand gets hidden in one view, it throws an error.

image

It could be a coincidence, but I think the error occurs when the hand gets occluded in some view.

image

The error is in frame 50, and the print above is from frame 40.

Using a checkerboard different from 9x6

Hello...

I was testing the extrinsic calibration using a checkerboard that is 5x3, and the calibration goes fine (without errors), but when doing the triangulation some odd things happen.

image

The points are out of place. And I took care to measure the size of the squares and used meters. But it is not working.

The interesting point is that when I used a 9x6 checkerboard it worked nicely.

Do you have an idea of what the issue could be? I'm using this 5x3 because the squares are bigger, so it should be easier to do the calibration.
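One thing worth double-checking here (an assumption about the cause, not a confirmed diagnosis): OpenCV-style checkerboard detection counts INNER corners, not squares, so a board with 6x4 printed squares is a "5x3" pattern. If the pattern size passed to calibration disagrees with the grid of object points, calibration can finish "without error" but produce a wrong scale or pose. A minimal sketch of how the object-point grid relates to the pattern size and square size:

```python
import numpy as np

pattern = (5, 3)        # inner corners (cols, rows); 6x4 squares -> 5x3 corners
square_size = 0.10      # edge length of one square, in meters (made-up value)

# Object points: the board's corner grid in its own plane (z = 0),
# spaced by the physical square size so triangulation comes out in meters.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
objp *= square_size
print(objp.shape)  # (15, 3)
```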

Index Error

I was running the code for SMPL-X reconstruction on my own videos using the following commands:
data='/content/0_input/data'
!python3 apps/demo/mv1p.py $data --out $data/output/smplx --vis_det --vis_repro --undis --sub_vis 1 2 --body bodyhandface --model smplx --gender male --write_smpl_full
But I got the following index error. I would appreciate any help solving this issue.
Thanks.
Screenshot from 2021-07-04 16-26-25

OpenPose installation on Ubuntu 18.04

  • Since it took me a while to figure out how to successfully compile OpenPose on Ubuntu (18.04), you might find this helpful (OpenPose will be needed if you want to run EasyMocap on your own videos).

  • Also don't forget to set the path option pointing to your openpose folder.
    Mine goes like:

data="/media/frankfurt/Walhalla/Ubuntu/dev/EasyMocap/data/testData"
openpose_path="/media/frankfurt/Walhalla/Ubuntu/dev/openpose"
python3 scripts/preprocess/extract_video.py ${data} --handface --openpose ${openpose_path}
(you can put these 3 lines into a shell file and run it from the terminal, assuming you are in the EasyMocap root folder)

here is the link:
https://medium.com/@erica.z.zheng/installing-openpose-on-ubuntu-18-04-cuda-10-ebb371cf3442
but you can skip this step: "Important: Replace CV_LOAD_IMAGE_GRAYSCALE with cv::IMREAD_GRAYSCALE." (this has been fixed in the latest version)

Beware! There is one hint missing in this tutorial: you must set the "BUILD_PYTHON" option inside cmake-gui.
(check this link for another description of how to compile OpenPose)
https://robinreni96.github.io/computervision/Python-Openpose-Installation/

Conversion to FBX

I saw the great work you have all done to get the BVH conversion working.

Here is my approach to convert to FBX, adapting the code from the VIBE mocap project:

install pandas
pip install pandas

install bpy via pip
pip install bpy

if pip install bpy doesn't work, install using conda (if you have conda installed)
conda install -c kitsune.one python-blender

copy the Blender 2.82 files next to the Python executable.
If you use conda in the base environment or a virtual environment, you can find the directory using

conda env list

Copy the 2.82 folder from the Blender download and paste it into the environment you are using.

and save "fbx_output_EasyMocap" in this folder: "scripts/postprocess"

and finish by following these instructions for downloading the SMPL for Unity model:
https://github.com/mkocabas/VIBE#fbx-and-gltf-output-new-feature

then you can test using the demo video file

python code/demo_mv1pmf_smpl.py demo_test --out out_demo_test --end 300 --vis_smpl --undis --sub_vis 1 7 13 19 --gender male

then test the fbx conversion with
python scripts/postprocess/fbx_output_EasyMocap.py --input out_demo_test/smpl/ --output out_demo_test/easymocap.fbx --fps_source 30 --fps_target 30 --gender male

The fbx conversion script:
fbx_output_EasyMocap.zip

Camera Synchronization

Multiple cameras are used to reconstruct each view at the same time. So how can the synchronization delay between cameras be avoided, and the influence of calibration error reduced?
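Without hardware genlock, a common trick (not something this repo provides; just a sketch) is to synchronize recordings on the audio track, e.g. a clap, via cross-correlation:

```python
import numpy as np

def estimate_offset(sig_a, sig_b):
    """Lag of sig_b relative to sig_a, in samples (positive: b starts later)."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(a, b, mode='full')
    return (len(sig_b) - 1) - int(np.argmax(corr))

# A clap at sample 100 in one track and sample 150 in the other:
a = np.zeros(400); a[100] = 1.0
b = np.zeros(400); b[150] = 1.0
print(estimate_offset(a, b))  # 50
```

Dividing the sample offset by (audio_sample_rate / fps) gives the frame offset to trim from each video before processing.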

openpose confidences being mistakenly set to 0.0

Easiest to explain this issue with images. On the left is the OpenPose render file openpose_render/0/000150_rendered.png; on the right is the file output by EasyMocap, output/smpl/detec/000000.jpg. As you can see, the detection box renders, but not the pose.

I enabled the display of confidences and can see that they seem to be 0.0:

However, in the annot file it is clear that the original OpenPose confidences are > 0; here is part of the annots/0/000150.json file:

"annots": [
        {
            "bbox": [
                88.82220000000004,
                140.55161904761928,
                365.26980000000003,
                1246.255619047619,
                0.7000167619047618
            ],
            "personID": 0,
            "keypoints": [
                [
                    328.614,
                    241.337,
                    0.691843
                ],
                [
                    188.964,
                    258.823,
                    0.800462
                ],
                [
                    262.121,
                    248.374,
                    0.727015
                ],
                [
                    185.618,
                    370.456,
                    0.847737
                ],
                [
                    265.899,
                    524.02,
                    0.813215
                ],
                [
                    126.208,
                    262.194,
                    0.76241
                ],
                [
                    0.0,
                    0.0,
                    0.0
                ],
                [
                    0.0,
                    0.0,
                    0.0
                ],
                [
                    199.425,
                    614.741,
                    0.612223
                ],
                [
                    230.901,
                    614.821,
                    0.588228
                ],
                [
                    248.3,
                    859.096,
                    0.688187
                ],
                [
                    237.92,
                    1075.63,
                    0.807521
                ],
                [
                    164.534,
                    607.861,
                    0.56494
                ],
                [
                    202.984,
                    852.107,
                    0.781301
                ],
                [
                    126.22,
                    1071.97,
                    0.766065
                ],
                [
                    342.563,
                    209.956,
                    0.766059
                ],
                [
                    0.0,
                    0.0,
                    0.0
                ],
                [
                    300.644,
                    181.94,
                    0.880711
                ],
                [
                    0.0,
                    0.0,
                    0.0
                ],
                [
                    199.387,
                    1099.76,
                    0.307208
                ],
                [
                    181.924,
                    1075.6,
                    0.245197
                ],
                [
                    112.19,
                    1096.46,
                    0.724447
                ],
                [
                    325.107,
                    1092.94,
                    0.762424
                ],
                [
                    307.698,
                    1103.36,
                    0.788679
                ],
                [
                    230.745,
                    1100.03,
                    0.77448
                ]
            ],

Has anyone else had this issue, or does anyone know where in the code the confidences might get changed to 0? I've looked but haven't found anything yet. The issue seems to occur when the person is too close to the camera, as the detec file appears to render correctly when the person walks further away from the camera (example here).
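A guess at what to look for (this is NOT EasyMocap's actual code, and the thresholds are made up): pipelines like this often mask out keypoints below a per-joint confidence threshold, and sometimes zero a whole detection whose mean confidence is too low, before triangulation. If something like that runs with an overly strict threshold, it would produce exactly this "all confidences become 0.0" symptom:

```python
import numpy as np

def filter_keypoints(keypoints, min_joint_conf=0.3, min_mean_conf=0.5):
    """Hypothetical confidence filtering, for illustration only."""
    kps = np.array(keypoints, dtype=float)   # (nJoints, 3): x, y, conf
    kps[kps[:, 2] < min_joint_conf] = 0.0    # zero out weak joints
    if kps[:, 2].mean() < min_mean_conf:     # zero out unreliable detections
        kps[:] = 0.0
    return kps
```

Searching the dataset-loading code for where a confidence threshold is applied (and printing the values right before and after) should show whether a step like this is the culprit.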

Calibration example

Can you share the calibration footage?

I tried to calibrate my own footage, but it was not working.
I would like to test footage that works, so I can compare and see what I'm doing wrong.

Can you share the calibration footage for intrinsic and extrinsic calibration?

If you can share information on the camera used, that would also be great, because I have a feeling that the type of camera may be the issue in my calibration tests.

By the way, it is now much easier to do the calibration; you did an amazing job. It looks like it just needs some tweaks.

Thanks

Different cameras

Hello,

I was wondering whether you can use a different camera model for each camera, or do they all have to be the same model?

Thanks!

Error changing mode from openpose to yolo-hrnet

When I ran the script extract_video.py, I changed the default mode from openpose to yolo-hrnet and this error occurred. I'm not sure if it is caused by missing files.

smpl model

Hello, I'm glad to see such an excellent project. When I run it, I find that the SMPL model is missing: "data/smplx/smpl does not exist!" Which SMPL model is used in this project? Can you provide the download link? Thank you!

Hand/finger pose estimation

Hi, I saw that OpenPose does this pose estimation, but from what I saw it doesn't seem that keypoints3d has these points.
Is it difficult to add them there?

The output is all zeros

Hi.
After rendering, the output keypoints are all 0. Is there any possible error that could cause this problem?

Using Easymocap commercially?

Thanks a lot for this amazing work.
I'm building an addon that makes it easier for people to import EasyMocap data into Blender, and some of the people who use this addon do so for commercial work.

I wanted to ask whether EasyMocap can be used commercially, so I can give a better answer to the people who ask.

How to interpret Keypoints3d?

I was able to run the script to get the pose estimation.
Now I'm trying to use it in Blender, but for that I have to convert the JSON to FBX, BVH or something loadable inside Blender.

I saw the keypoints3d files and was thinking of trying to create some sort of converter to BVH files.
But I don't have much info on these files.

Can someone give more info on them?

I saw this on the documentation.

The data in keypoints3d/000000.json is a list, each element represents a human body.

{
    'id': <id>,
    'keypoints3d': [[x0, y0, z0, c0], [x1, y1, z1, c1], ..., [xn, yn, zn, cn]]
}

I guess x, y and z are the usual 3D coordinates, but what does "c" mean?

Thanks.
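For reference, c is the per-joint confidence score; joints that could not be reliably reconstructed come out as all zeros (as in the annot excerpts elsewhere in this thread). A minimal reading sketch (the sample values are made up):

```python
import json
import numpy as np

# Made-up sample in the documented format; c is the per-joint confidence,
# and a row of zeros marks a joint that was not reconstructed.
sample = '[{"id": 0, "keypoints3d": [[0.1, 0.2, 1.0, 0.9], [0.0, 0.0, 0.0, 0.0]]}]'

for person in json.loads(sample):
    kps = np.array(person['keypoints3d'])   # shape (nJoints, 4): x, y, z, c
    xyz, conf = kps[:, :3], kps[:, 3]
    valid = conf > 0                        # mask out missing joints
    print(person['id'], int(valid.sum()))   # 0 1
```

For a BVH converter, you would keep the mask and skip (or interpolate) the invalid joints per frame.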

ModuleNotFoundError: No module named 'easymocap'

Hi, I'm stuck when trying to run
python apps/demo/mv1p.py 0_input/sample --out 1_output/sample --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --vis_smpl

It gives me this error:
Traceback (most recent call last): File "apps/demo/mv1p.py", line 9, in <module> from easymocap.smplmodel import check_keypoints, load_model, select_nf ModuleNotFoundError: No module named 'easymocap'

I have tried reinstalling but get the same results. Any ideas?

heres the screenshot:
image

What is the recommended camera model / spec?

Hi, may I know the model of the camera you used to prepare the sample dataset? Does it have to be genlocked, or can we just synchronize manually using a clapperboard? I'm planning to try your code on a set of IP cameras, but I'd love to hear your thoughts on this setup. Thank you.

Faster Manual Calibration

Hello, this is not an issue.

I just found a faster way to calibrate the checkerboard for the extrinsic parameters, and I would like to share it.

First, install labelme using the command
pip install labelme

then follow this video.
https://youtu.be/oCNE-yRaFAc

[Quick Start] Lack of annots in the example dataset.

First, I would like to thank you for your efforts in this amazing repo! I was trying to go through Quick Start and found a gap between the example dataset and the code.

First, I ran scripts\preprocess\extract_video.py to extract frames from the given videos with --no2d, as the OpenPose results are already included in the data pack.
Then, I ran mv1p.py to fit SMPL-X parameters as instructed, but the program just cannot correctly load the annotations. The MV1PMF dataset class loads JSON with read_annot in easymocap\mytools\file_utils.py, which differs from the default key format of OpenPose's results. It looks like there should be a script to transform the JSON files from OpenPose before they are fed into read_annot.

Did I miss anything? Thanks in advance.

MANO processing

I found the new options to process hands only, and when trying them, I get this error:

OSError: data/smplx\J_regressor_mano_RIGHT.txt not found.

image

Where can I download it?

RuntimeError: The size of tensor a (42) must match the size of tensor b (0) at non-singleton dimension 1

I am running the following command,
python apps/demo/mv1p.py input/sample --out output/sample --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body bodyhandface --model smplx --gender male --vis_smpl

My Python traceback is below:
File "apps/demo/mv1p.py", line 109, in
mv1pmf_smpl(dataset, args)
File "apps/demo/mv1p.py", line 71, in mv1pmf_smpl
weight_shape=weight_shape, weight_pose=weight_pose)
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pipeline\basic.py", line 62, in smpl_from_keypoints3d2d
params = multi_stage_optimize(body_model, params, kp3ds, kp2ds, bboxes, Pall, weight_pose, cfg)
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pipeline\basic.py", line 30, in multi_stage_optimize
params = optimizePose3D(body_model, params, kp3ds, weight=weight, cfg=cfg)
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pyfitting\optimize_simple.py", line 300, in optimizePose3D
params = _optimizeSMPL(body_model, params, prepare_funcs, postprocess_funcs, loss_funcs, weight_loss=weight, cfg=cfg)
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pyfitting\optimize_simple.py", line 246, in _optimizeSMPL
final_loss = fitting.run_fitting(optimizer, closure, opt_params)
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pyfitting\optimize.py", line 38, in run_fitting
loss = optimizer.step(closure)
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pyfitting\lbfgs.py", line 307, in step
orig_loss = closure()
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pyfitting\optimize_simple.py", line 231, in closure
loss_dict = {key:func(kpts_est=kpts_est, **new_params) for key, func in loss_funcs.items()}
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pyfitting\optimize_simple.py", line 231, in
loss_dict = {key:func(kpts_est=kpts_est, **new_params) for key, func in loss_funcs.items()}
File "D:\mMocap\EasyMocap-master\apps\demo\easymocap\pyfitting\lossfactory.py", line 63, in hand
diff_square = (kpts_est[:, 25:25+42, :3] - self.keypoints3d[:, 25:25+42, :3])*self.conf[:, 25:25+42]
RuntimeError: The size of tensor a (42) must match the size of tensor b (0) at non-singleton dimension 1

Multiview detection different camera-types possible

I'd like to ask whether it's possible to use different kinds of cameras (like 3 or 4 different smartphones) for the same motion take. After checking the intrinsics file of your example, it's rather obvious that the cameras were all of the same type, but I'd like to use different smartphones to capture the action of an actor.
Also, providing an out-of-the-box calibration toolbox is super helpful. Thank you very much for all the effort you're putting into this project! (Maybe add a comment to the gridsize option that it's in meters.)

I found another issue entry here with the same question.

Google colab

Is there a way to use this project on Google Colab?

extract_video error on windows

Executing the extract_video script on Windows gives an error.

But changing line 30 from

for cnt in tqdm(range(totalFrames), desc='{:-10s}'.format(os.path.basename(videoname))):

to

for cnt in tqdm(range(totalFrames)):

worked (I got the code from version 0.1).
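For what it's worth, this looks platform-independent: the '-' in '{:-10s}' is parsed as a sign flag, which is not allowed in a string format specifier, so the call raises ValueError wherever it runs. Dropping just the sign keeps the per-video label on the progress bar instead of removing desc entirely (the path below is a made-up example):

```python
import os

videoname = '/path/to/video1.mp4'   # hypothetical path

# '{:-10s}' raises ValueError ("Sign not allowed in string format
# specifier"); the sign flag is only valid for numeric types.
# Dropping only the '-' preserves the left-aligned, padded label:
desc = '{:10s}'.format(os.path.basename(videoname))
print(repr(desc))  # 'video1.mp4'
```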

A question about multiview

Thank you for this great work. I checked your output, and it is given per camera. I mean, if the input is 3 cameras, then it can produce output for each related camera (input of camera 1 --> output of camera 1, input of camera 2 --> output of camera 2, input of camera 3 --> output of camera 3), or we can get a single output for a selected camera.

My question is that there is no fusion of the multiview inputs to get a single fused output of all input cameras, as in MVPose, the cross_view_pose video, etc. Is there any way to get output like those projects?

About camera parameters

Hi!
After running the demo, I have a question about the camera parameters. Does the rotation vector transform the camera coordinate system to the world coordinate system, or the world coordinate system to the camera coordinate system? I found that my dataset provides a world-to-camera rotation vector, so it may need to be inverted.
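A quick way to check which direction a given R, T pair goes is to round-trip a known point; inverting the extrinsics is cheap either way. A sketch with made-up values:

```python
import numpy as np

# Made-up world-to-camera extrinsics: x_cam = R @ x_world + T
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.0, 0.0, 3.0])

# Camera-to-world is the inverse transform:
R_inv = R.T              # a rotation's inverse is its transpose
T_inv = -R.T @ T

x_world = np.array([1.0, 2.0, 0.5])
x_cam = R @ x_world + T
assert np.allclose(R_inv @ x_cam + T_inv, x_world)
```

If your dataset stores a rotation vector, convert it to a matrix first (e.g. with cv2.Rodrigues), apply the inversion above, then convert back if needed.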

AttributeError

Hello, I was running EasyMocap. I was doing the intrinsic calibration of the camera using the script calib_intri.py but got this error. Has anyone found similar issues, or can anyone help me solve this one?
Thanks..
image

Triangulated Points are crazy

Hi, thanks a lot for such great code.

I would like to ask for help. I did my intrinsic lens calibration using a tutorial on the OpenCV page, and did the extrinsic calibration with the code provided here on GitHub, but my result is weird.

When I look at the keypoints images, they look fine, like this one:
image

When I see the "repro" folder, the image looks like this (insted of having the skeleton on the body, there is only a point folllowing my face)
image

And when I render with SMPL, it seems that the model goes where that white dot was:
image

I don't know what to do now. Can you give me some directions?

Thanks a lot

Quick start error

Thanks for this amazing work. I am just a bit confused, as I keep getting this error after running the script to extract the SMPL-X parameters. The problem is within the function smpl_from_keypoints3d2d, as it returns:
load_weight_shape() missing 1 required positional argument: 'opts'. I am not sure what I am missing. Any guidance or hints would be appreciated.

Thanks

Visualization with Open3D

In the current version, we visualize the skeleton, SMPL mesh and scenes with pyrender, which is not suitable for realtime applications.
If someone is interested in this feature, you can make a pull request.

Some calibration-tips:

Since calibration is extremely important for a successful capture session, I'd like to contribute some experiences and thoughts. I made a lot of mistakes in the past which can be avoided by following some rules of thumb:

  1. Sensor type:
  • It should ideally be a global-shutter sensor, where the whole picture (every pixel) is captured at the same time; rolling shutter can lead to "smeared" images (but if the frame rate is high enough, rolling-shutter sensors are equal in quality)

  • Although mobile cameras are easily available, it can be hard to get the camera sensor type/name (in the case of Apple mobile devices, sometimes impossible). That's important because without knowledge of the specific sensor size and physical pixel size you can't really get world units back (the OpenCV camera matrix expresses focal length in pixels; https://answers.opencv.org/question/139166/focal-length-from-calibration-parameters/ is a good read)

  2. Lenses / focus:
  • It's also quite important to keep the focal length the same over the whole recording session, otherwise the camera matrix must be adjusted dynamically. If you want to re-use your intrinsics file, your focal length must never change (this might be problematic for mobile cams, where auto-focus is almost always on by default in most apps). For Apple devices, I've found "ProCam8" to be the only app where you can numerically set a manual focus (and therefore re-use these settings)

  • Another thing to keep in mind is that practically every mobile device pre-undistorts the recorded raw images, which is on the one hand nice because you can then set the distortion parameters to 0 (and you should do so, otherwise you will undistort an already undistorted image, which leads to strange artefacts along the boundaries), but on the other hand, without proper knowledge of these non-linear coefficients a re-projection is impossible (correct me here if I'm wrong)

  • So in general I'd recommend either a fixed-focus lens or a manual-focus lens, which must be locked after calibration (both ideally with <1% distortion); if that's not possible, you should do the undistortion and rectification yourself (the calib tools here also do this for you)

  3. Calibration patterns / pattern size:
  • On this you could actually write a whole book (although most tutorials I've found never really dig into it). You should always calibrate for the distance of your desired recording subject. The board should always have a slight angle (never completely co-planar with the camera, but also never angled too much; +/- 30-40 degrees always works for me). The larger the distance, the bigger the pattern image should be (ideally the pattern should cover the whole image, but this is very often impossible). Therefore shoot enough images (about 10-20, especially at the edges of the image if you are facing distortion from your lens; otherwise your distortion coefficients will lead to bad undistortion)

  • Try to get the best checkerboard image quality you can: it should be non-reflective, high-contrast, as planar as possible, and with very good print quality

  • One last thought: there should be either an automated or a heuristic quality check that evaluates the reprojection error of every image that is later fed into the calibration algorithm. If not, even a single outlier (bad calibration image) can spoil the overall calibration. I usually use MATLAB + its calibration toolbox, which lets you check the RMS error of every single image (you can also use the MATLAB camera matrix with OpenCV, but don't forget to transpose the matrix, since MATLAB matrices are column-major and OpenCV is row-major)

  • For most of my purposes a "good" overall RMS error is between 0.1 and 0.3 pixels (depending on the precision needed and the hardware quality I can use)
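The per-image outlier check described above is easy to script yourself; a minimal sketch with made-up corner values:

```python
import numpy as np

def rms_reprojection_error(observed_px, reprojected_px):
    """Per-image RMS reprojection error in pixels."""
    d = np.asarray(observed_px) - np.asarray(reprojected_px)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Made-up detected corners vs. their reprojections for one image:
observed    = np.array([[100.0, 200.0], [150.0, 210.0]])
reprojected = np.array([[100.3, 199.6], [149.8, 210.1]])
err = rms_reprojection_error(observed, reprojected)
print(round(err, 3))  # 0.387
```

Images whose per-image error is far above your overall target (the 0.1-0.3 px range mentioned above) are the candidates to discard before recalibrating.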
