
dlr-rm / blenderproc


A procedural Blender pipeline for photorealistic training image generation

License: GNU General Public License v3.0

Python 98.93% TeX 1.07%
blender-pipeline segmentation depth-images camera-positions suncg-scene camera-sampling blender-installation synthetic blender rendering

blenderproc's People

Contributors

5trobl, abahnasy, andrewyguo, andreyyashkin, apenzko, cornerfarmer, cuteday, davidrisch, hansaskov, harinandan1995, ideas-man, jascase901, joe3141, mansoorcheema, markusknauer, martinsmeyer, marwinnumbers, maximilianmuehlbauer, mayman99, moizsajid, neixlo, probabilisticrobotics, saprrow, sebastian-jung, themasterlink, thodan, victorlouisdg, wangg12, wboerdijk, zzilch

blenderproc's Issues

writer.CocoAnnotationsWriter creating annotations with empty segmentations and bboxes with 0 area

Error Details

Under certain circumstances, the current writer.CocoAnnotationsWriter generates annotations with empty segmentation lists or bboxes with an area of zero. This did not happen in an older version of BlenderProc that we were using a few months ago. Inspection of the generated renders shows that the affected objects are likely sub-pixel in size (appearing in the render because of effects such as anti-aliasing). This is mainly a problem for us because such annotations are incompatible with Facebook's Detectron2.

This problem is somewhat subjective, since these segmentations might have a use case. If this behavior and its consequences are intentional, I will probably file a feature request to re-enable the old behavior.
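
In the meantime, a minimal workaround sketch (my own, not part of BlenderProc) that drops such degenerate annotations from a generated coco_annotations.json before training; it assumes the annotation layout shown in the example below:

import json

# Paths are placeholders; point them at the generated coco_annotations.json.
with open("coco_annotations.json") as f:
    coco = json.load(f)

def is_valid(ann):
    # area may be a plain number or a single-element list (as in the example below)
    area = ann["area"][0] if isinstance(ann["area"], list) else ann["area"]
    width, height = ann["bbox"][2], ann["bbox"][3]
    return len(ann["segmentation"]) > 0 and area > 0 and width > 0 and height > 0

coco["annotations"] = [a for a in coco["annotations"] if is_valid(a)]

with open("coco_annotations_filtered.json", "w") as f:
    json.dump(coco, f)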

Examples in generated data

Annotation

{'id': 35267, 'image_id': 9142, 'category_id': 3, 'iscrowd': 0, 'area': [0], 'bbox': [423, 0, 1, 0], 'segmentation': [], 'width': 512, 'height': 512}

Image (likely source of error marked)

(attached render rgb_9142, with the likely offending object marked)

Recreation

  1. Copy the examples/bop_object_pose_sampling/config.yaml
  2. Use BOP or provide your own model files. (Make sure there are cases where objects fall outside the camera's field of view or overlap with the border of the camera's view frustum.)
  3. Run for a large set of images (total_noof_cams > 2500).
  4. Examine the output; it will likely contain annotations similar to the one above.

Error Messages from Detectron2

Crash Message 1

ERROR [06/12 12:20:49 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/engine/train_loop.py", line 132, in train
    self.run_step()
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/engine/train_loop.py", line 209, in run_step
    data = next(self._data_loader_iter)
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/data/common.py", line 142, in __iter__
    for d in self.dataset:
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 838, in _next_data
    return self._process_data(data)
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/data/common.py", line 41, in __getitem__
    data = self._map_func(self._dataset[cur_idx])
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/utils/serialize.py", line 23, in __call__
    return self._obj(*args, **kwargs)
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/data/dataset_mapper.py", line 139, in __call__
    annos, image_shape, mask_format=self.mask_format
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/data/detection_utils.py", line 321, in annotations_to_instances
    segms = [obj["segmentation"] for obj in annos]
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/data/detection_utils.py", line 321, in <listcomp>
    segms = [obj["segmentation"] for obj in annos]
KeyError: 'segmentation'

The same crash also occurs on a second machine (Windows):

KeyError: Caught KeyError in DataLoader worker process 2.
Original Traceback (most recent call last):
  File "C:\Users\URI PC\Anaconda3\envs\Detectron22\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Users\URI PC\Anaconda3\envs\Detectron22\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\URI PC\Anaconda3\envs\Detectron22\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "c:\users\uri pc\detectron2\detectron2\data\common.py", line 39, in __getitem__
    data = self._map_func(self._dataset[cur_idx])
  File "c:\users\uri pc\detectron2\detectron2\data\dataset_mapper.py", line 131, in __call__
    annos, image_shape, dataset_dict, mask_format=self.mask_format
  File "c:\users\uri pc\detectron2\detectron2\data\detection_utils.py", line 240, in annotations_to_instances
    polygons = [obj["segmentation"] for obj in annos]
  File "c:\users\uri pc\detectron2\detectron2\data\detection_utils.py", line 240, in <listcomp>
    polygons = [obj["segmentation"] for obj in annos]
KeyError: 'segmentation'

Crash Message 2

ERROR [06/12 12:17:15 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/engine/train_loop.py", line 132, in train
    self.run_step()
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/engine/train_loop.py", line 215, in run_step
    loss_dict = self.model(data)
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/modeling/meta_arch/rcnn.py", line 123, in forward
    _, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/modeling/roi_heads/roi_heads.py", line 669, in forward
    losses.update(self._forward_mask(features, proposals))
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/modeling/roi_heads/roi_heads.py", line 773, in _forward_mask
    return self.mask_head(mask_features, proposals)
  File "/home/ryan/.pyenv/versions/objDet3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/modeling/roi_heads/mask_head.py", line 190, in forward
    return {"loss_mask": mask_rcnn_loss(x, instances, self.vis_period)}
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/modeling/roi_heads/mask_head.py", line 63, in mask_rcnn_loss
    gt_masks_per_image = instances_per_image.gt_masks.crop_and_resize(
  File "/home/ryan/Documents/NIUVT/objDet/detectron/detectron2/detectron2/structures/instances.py", line 60, in __getattr__
    raise AttributeError("Cannot find field '{}' in the given Instances!".format(name))
AttributeError: Cannot find field 'gt_masks' in the given Instances!

blendtorch - domain randomization

Hey,

I've just found your project and papers - very interesting work! We are researching in a similar direction: we use Blender to massively randomize a scene in real time and use the annotated data for training neural networks. To do so, we've created blendtorch

https://github.com/cheind/pytorch-blender

which allows us to stream data from parallel Blender instances directly into PyTorch data pipelines. We avoid generating intermediate files and decoupling the data generation process from network training, because we install a feedback channel that allows us to adapt the simulation to the current training needs.

I thought I'd reach out to you in case you see a research fit with your procedural generation pipeline.

Generating segmentation masks and bounding box from replica

Hi,

First of all, thank you so much for this awesome project. It looks really well documented and useful!

I am trying to use BlenderProc to generate some data from the Replica dataset. Specifically, I need the RGB image, the depth image, the camera intrinsics and extrinsics, object bounding boxes and segmentation masks.

I think the CameraStateWriter will give me the intrinsics and extrinsics, but I am unsure how to get depth, segmentation and object bounding boxes.

For segmentation masks, I tried adding the following module to the config.yaml file, but it throws an error:

  {
      "module": "renderer.SegMapRenderer",
      "config": {
        "use_alpha": True
      }
    },
Traceback (most recent call last):
  File "/home/ayushj2/replica/Replica-Dataset/BlenderProc/src/run.py", line 37, in <module>
    pipeline.run()
  File "./src/main/Pipeline.py", line 92, in run
    module.run()
  File "./src/renderer/SegMapRenderer.py", line 287, in run
    "value.".format(current_obj.name, current_attribute, used_attribute))
Exception: The obj: mesh does not have the attribute: cp_category_id, striped: category_id. Maybe try a default value.

Can you please let me know the right way to extract segmentation, depth and bounding boxes? Thank you so much!

Conceptual confusion about multiple objects' rotation_euler

I am new to pose estimation and have some basic conceptual confusion about camera pose. If the rendered scene contains many objects, is there 6D pose information for each object in a single image, or only one 6D pose per image regardless of whether the image shows one or many objects?

Initializer parameters

I was checking the initializer script https://github.com/DLR-RM/BlenderProc/blob/787f1873d5dd786305728c9973849c6c5f50e083/src/main/Initializer.py and I would like to ask what kind of parameters I can specify through the config.yaml for the SUNCG dataset.

    {
      "module": "main.Initializer",
      "config": {
        "global": {
          "output_dir": "<args:1>"
        }
      }
    }

For example, I see that you initialize a camera in the line

# Create the camera

Can this be disabled and possibly initiated later on with the camera module? Also, to set the horizon to dark, do I just need to include the variable "horizon_color": [0.0, 0.0, 0.0] under "global"?

Thanks.

Models

Would you consider providing pre-trained or fully trained models?

How to add a background image for the RGB image

Some papers use background images randomly selected from the PASCAL VOC or SUN datasets. Is there any method to add background images in this code, or do I have to crop the image with the segmentation masks and paste it onto a background image after rendering?

Rendering with transparent background

(attached: 1_colors and 01 - example renders for each case)
When I try to render with a transparent background, I get a solid black background instead.
I've added
bpy.context.scene.render.film_transparent = True
and
bpy.context.scene.render.image_settings.color_mode ='RGBA'
I've checked that these settings work in another script, so something is probably overriding them in BlenderProc. Do you have any suggestions on how to fix this? I've attached an example of each case with a ShapeNet chair.

How to turn off transparency for objects?

Hi, I'm trying to project the depth map into world coordinates using the depth obtained from Blender, but I've noticed that for transparent materials the depth does not behave the same as for solid ones. Is there an option to turn off transparency?

One light position per camera position

Hi, I created a file with a bunch of camera coordinates and a file of light coordinates. When I generated the pictures, I noticed that every scene contains one light per coordinate. What I actually want is one light per coordinate per scene, i.e. there should only be one light source per image, and I want to be able to specify its position with coordinates. Is that possible somehow?

About bop object physics positioning

I tried the bop object physics positioning example. I found that some objects, like benchvise, lamp, duck, glue, can and driller, were always lying on the ground. Using the example config I rendered about 100 images and could not find a single image in which any of these objects was standing. What could be the problem?

Make coco annotations compatible with official Coco format

Currently the annotations are created as follows.

        annotation_info = {
            "id": annotation_id,
            "image_id": image_id,
            "category_id": category_id,
            "iscrowd": 0,
            "area": [area],
            "bbox": bounding_box,
            "segmentation": segmentation,
            "width": binary_mask.shape[1],
            "height": binary_mask.shape[0],
        }

while the officially provided format is

annotation{
"id": int, 
"image_id": int, 
"category_id": int, 
"segmentation": RLE or [polygon], 
"area": float, 
"bbox": [x,y,width,height], 
"iscrowd": 0 or 1,
}

As you can see, in the official format the area is given as a float, while BlenderProc currently gives it as a list containing an int. I don't think the int itself matters, but the fact that it is a list makes it incompatible with certain repos like the official COCO API.
I've fixed this in my local BlenderProc by just removing the list brackets.
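
For reference, a minimal sketch of the corrected construction (same fields as above, but with the area stored as a plain float rather than a single-element list, matching the official spec):

def build_annotation_info(annotation_id, image_id, category_id, bounding_box,
                          segmentation, area, binary_mask):
    # Same dict as above, with "area" stored as a float instead of [area].
    return {
        "id": annotation_id,
        "image_id": image_id,
        "category_id": category_id,
        "iscrowd": 0,
        "area": float(area),
        "bbox": bounding_box,          # [x, y, width, height]
        "segmentation": segmentation,  # RLE or [polygon]
        "width": binary_mask.shape[1],
        "height": binary_mask.shape[0],
    }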

Mac OS X Support

Right now, BlenderProc struggles to run on Mac OS X. We are working on a solution that lets us test BlenderProc automatically on an Apple platform.

Be warned: most recent Apple hardware contains AMD GPUs, which are less well supported by Cycles than Nvidia GPUs, so the runtime could be much longer than we claim.

Depth to Pointcloud

Is there any example of projecting the depth map to a point cloud? I want to use it with PointNet.

I created the following script, but it doesn't seem correct.

# i, j are indices into the depth map
# z is the depth value

import numpy as np

vm = data_object.matrix_world.inverted()  # view matrix
pm = data_object.calc_matrix_camera(..)  # projection matrix

def point(i, j, z, vm, pm, width, height):
    # map pixel (i, j) and depth z to NDC, then invert the combined view-projection matrix
    x = (i / width)
    y = ((height - j) / height)
    r = (2 * np.array([x, y, z, 1], dtype=np.float32) - 1).T @ np.linalg.inv(vm @ pm)
    r /= r[-1]
    return r[:3]
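
For comparison, a minimal back-projection sketch that works directly from the camera intrinsics K, assuming the depth map stores distances along the camera's optical axis rather than ray lengths:

import numpy as np

def depth_to_pointcloud(depth, K):
    # Back-project an (H, W) depth map into camera-frame 3D points using a 3x3 intrinsics matrix K.
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # drop background / invalid pixels (zero or non-finite depth)
    return points[np.isfinite(points).all(axis=1) & (points[:, 2] > 0)]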

entity sampler repeating first object sample

The entity sampler correctly outputs an image where the model has the correct rotation; however, the rotation sampled for the first frame is repeated, so every rendered image is exactly the same (0.hdf5, 1.hdf5, ...).

I have no idea why this is happening. The config used for generating the data is here:

# Args: <cam_file> <obj_file> <output_dir>
{
  "version": 3,
  "setup": {
    "blender_install_path": "/home_local/<env:USER>/blender/",
    "pip": [
      "h5py"
    ]
  },
  "modules": [
    {
      "module": "main.Initializer",
      "config":{
        "global": {
          "output_dir": "<args:2>"
        }
      }
    },
    {
      "module": "loader.ObjectLoader",
      "config": {
        "path": "<args:1>"
      }
    },
    {
      "module": "lighting.LightLoader",
      "config": {
        "lights": [
          {
            "type": "POINT",
            "location": [5, -5, 5],
            "energy": 1000
          }
        ]
      }
    },
    {
      "module": "camera.CameraLoader",
      "config": {
        "path": "<args:0>",
        "file_format": "location rotation/value",
        "default_cam_param": {
          "fov": 1
        }
      }
    },
    {
          "module": "manipulators.EntityManipulator",
          "config": {
            "selector": {
              "provider": "getter.Entity",
              "conditions": {
                "name": "OfficeBranch_v5.002",
                "type": "MESH" # this guarantees that the object is a mesh, and not for example a camera
              }
            },
            "rotation_euler": {
              "provider": "sampler.Uniform3d",
              "max":[1.5, 0, 0],
              "min":[1.6, 0, 6.28]
        }
        }
    },
    {
      "module": "renderer.RgbRenderer",
      "config": {
        "output_key": "colors",
        "samples": 5,
        "render_normals": True,
        "normals_output_key": "normals",
        "render_depth": True,
        "depth_output_key": "depth"
      }
    },
    {
      "module": "writer.Hdf5Writer",
      "config": {
        "postprocessing_modules": {
          "depth": [
            {
              "module": "postprocessing.TrimRedundantChannels",
            }
          ]
        }
      }
    }
  ]
}

Any suggestions on how to debug with PyCharm?

Hi,
I would like to ask if you can provide some practical suggestions on how to debug the code (since in the main run.py you use Popen to start a new process), especially with PyCharm.

Some reference links would be great.

Thanks
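
One generic workaround (a sketch only, not a BlenderProc feature): attach PyCharm's remote debug server from inside the Blender-side Python process via the pydevd-pycharm package. The host, port and installation route below are assumptions that have to be adapted to your setup.

# Requires the pydevd-pycharm package inside Blender's Python (it could be added to
# the "pip" list in the config's "setup" section) and a "Python Debug Server" run
# configuration in PyCharm listening on the same port. Place this at the top of the
# module you want to step through.
import pydevd_pycharm

pydevd_pycharm.settrace("localhost", port=12345,
                        stdoutToServer=True, stderrToServer=True, suspend=True)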

Enhancement/PR#24: Change ObjectStateWriter so the attribute name is written as well

This is the issue for the PR#24 to add the "name" attribute to the ObjectStateWriter.

Background:
When the ObjectStateWriter and the Hdf5Writer are in the config.yaml, the object ids and object poses are written to the .hdf5 file. The id corresponds to the Blender scene, and there is no information about which original 3D object (e.g. obj_000001.ply or ape.ply) the given information belongs to.

Enhancement:
Add the attribute "name" to the ObjectStateWriter, so that the name of the original .ply object is written to the file as well.

Trying to load own .ply models with texture.png like in the ycbv dataset

Hello,
I am currently trying to load my own set of CAD models in .ply format with texture files, like those of the ycbv dataset.
I extended the data for that with the name of the dataset. Does the .ply file have to have the same header structure as the ones from the ycbv dataset, for example the texture u,v coordinates? If I load objects from both datasets, the ycbv one and my own, the objects from both get placed, but in the output image I only see the objects from the ycbv dataset.

Thanks in advance.

Rotation and translation matrices do not align with the ground truth

Problem:
When I take the 3D points (.ply file) of an object, rotate and translate them with the R, t and K given by the BopWriter, and project them onto the image, the projected points do not align with the ground truth.

Expected behavior:
The rotated and translated 3D points should align exactly with the ground truth when projected onto the image plane.

Example:
(attached: 000002, Figure_1, Figure_1_1 - rendered image and misaligned projections)

Steps to reproduce:
I used the R, t and K from the BopWriter, the "colors" image from the Hdf5Writer and the 3D points from the LineMOD .ply files. To project the 3D points onto the image plane I used functions similar to the following.

import numpy as np
import matplotlib.pyplot as plt

def project_3d_to_2d(pts_3d, R, t, K):
    # projects the 3d points onto the 2d image plane
    pts_cam = np.matmul(pts_3d, R.T) + t.T      # transform into the camera frame
    pts_2d = np.matmul(pts_cam, K.T)            # apply the intrinsics
    pts_2d = pts_2d[:, :2] / pts_2d[:, 2:3]     # perspective division
    return pts_2d.astype(np.float32)

def visualize_pose(img, pts_2d):
    # visualizes the 2d projection in the image
    plt.imshow(img)
    plt.plot(pts_2d[:, 0], pts_2d[:, 1], '.')
    plt.show()

The config.yaml is a slightly modified version of an old bop_object_physics_positioning example and looks like this.

# Args: <path_to_bop_data> <bop_datset_name> <bop_toolkit_path> <path_to_cc_textures> <output_dir>
{
  "version": 3,
  "setup": {
    "blender_install_path": "./blender/",
    "pip": [
      "h5py",
      "scikit-image",
      "pypng==0.0.20",
      "scipy==1.2.2"
    ]
  },
  "modules": [
    {
      "module": "main.Initializer",
      "config": {
        "global": {
          "output_dir": "<args:4>",
          "sys_paths": ["<args:2>"]
        }
      }
    },
    {
      "module": "loader.BopLoader",
      "config": {
        "bop_dataset_path": "<args:0>/lm",
        "model_type": "",
        "mm2m": True,
        "sample_objects": True,
        "num_of_objs_to_sample": 3,
        "obj_instances_limit": 1,
        "add_properties": {
          "cp_physics": True
        },
        "cf_set_shading": "SMOOTH"
      }
    },
    {
      "module": "materials.MaterialManipulator",
      "config": {
        "selector": {
          "provider": "getter.Material",
          "conditions": [
          {
            "name": "bop_lm_vertex_col_material.*"
          }
          ]
        },
        "cf_set_specular": {
          "provider": "sampler.Value",
          "type": "float",
          "min": 0.0,
          "max": 1.0
        },
        "cf_set_roughness": {
          "provider": "sampler.Value",
          "type": "float",
          "min": 0.0,
          "max": 1.0
        }
      }
    },
    {
      "module": "constructor.BasicMeshInitializer",
      "config": {
        "meshes_to_add": [
        {
          "type": "plane",
          "name": "ground_plane0",
          "scale": [2, 2, 1]
        },
        {
          "type": "plane",
          "name": "ground_plane1",
          "scale": [2, 2, 1],
          "location": [0, -2, 2],
          "rotation": [-1.570796, 0, 0] # switch the sign to turn the normals to the outside
        },
        {
          "type": "plane",
          "name": "ground_plane2",
          "scale": [2, 2, 1],
          "location": [0, 2, 2],
          "rotation": [1.570796, 0, 0]
        },
        {
          "type": "plane",
          "name": "ground_plane4",
          "scale": [2, 2, 1],
          "location": [2, 0, 2],
          "rotation": [0, -1.570796, 0]
        },
        {
          "type": "plane",
          "name": "ground_plane5",
          "scale": [2, 2, 1],
          "location": [-2, 0, 2],
          "rotation": [0, 1.570796, 0]
        },
        {
          "type": "plane",
          "name": "light_plane",
          "location": [0, 0, 10],
          "scale": [3, 3, 1]
        }
        ]
      }
    },
    {
      "module": "manipulators.EntityManipulator",
      "config": {
        "selector": {
          "provider": "getter.Entity",
          "conditions": {
            "name": '.*plane.*'
          }
        },
        "cp_physics": False,
        "cp_category_id": 333
      }
    },
    {
      "module": "materials.MaterialManipulator",
      "config": {
        "selector": {
          "provider": "getter.Material",
          "conditions": {
            "name": "light_plane_material"
          }
        },
        "cf_switch_to_emission_shader": {
          "color": {
            "provider": "sampler.Color",
            "min": [0.5, 0.5, 0.5, 1.0],
            "max": [1.0, 1.0, 1.0, 1.0]
          },
          "strength": {
            "provider": "sampler.Value",
            "type": "float",
            "min": 3,
            "max": 6
          }
        }
      }
    },
    {
      "module": "loader.CCMaterialLoader",
      "config": {
        "folder_path": "<args:3>"
      }
    },
    {
      "module": "materials.MaterialRandomizer",
      "config": {
        "randomization_level": 1,
        "mode": "once_for_all",
        "manipulated_objects": {
          "provider": "getter.Entity",
          "conditions": {
            "name": "ground_plane.*"
          }
        },
        "materials_to_replace_with": {
          "provider": "getter.Material",
          "conditions": {
            "cp_is_cc_texture": True
          }
        }
      }
    },
    {
      "module": "object.ObjectPoseSampler",
      "config": {
        "objects_to_sample": {
          "provider": "getter.Entity",
          "conditions": {
            "cp_physics": True
          }
        },
        "pos_sampler": {
          "provider":"sampler.Uniform3d",
          "min": {
            "provider": "sampler.Uniform3d",
            "min": [-0.3, -0.3, 0.2], # [-0.3, -0.3, 0.0]
            "max": [-0.2, -0.2, 0.2] # [-0.2, -0.2, 0.0]
          },
          "max": {
            "provider": "sampler.Uniform3d",
            "min": [0.2, 0.2, 0.4],
            "max": [0.3, 0.3, 0.6]
          }
        },
        "rot_sampler":{
          "provider":"sampler.UniformSO3",
          "around_x": True,
          "around_y": True,
          "around_z": True,
        }
      }
    },
    {
      "module": "object.PhysicsPositioning",
      "config": {
        "min_simulation_time": 3,
        "max_simulation_time": 10,
        "check_object_interval": 1,
        "solver_iters": 25,
        "steps_per_sec": 100,
        "friction": 100.0,
        "linear_damping": 0.99,
        "angular_damping": 0.99,
        "objs_with_box_collision_shape": {
          "provider": "getter.Entity",
          "conditions": {
            "name": "ground_plane.*"
          }
        }
      }
    },
    {
      "module": "lighting.LightSampler",
      "config": {
        "lights": [
        {
          "location": {
            "provider": "sampler.Shell",
            "center": [0, 0, 0],
            "radius_min": 1,
            "radius_max": 1.5,
            "elevation_min": 5,
            "elevation_max": 89,
            "uniform_elevation": True
          },
          "color": {
            "provider": "sampler.Color",
            "min": [0.5, 0.5, 0.5, 1.0],
            "max": [1.0, 1.0, 1.0, 1.0]
          },
          "type": "POINT",
          "energy": 200
        }
        ]
      }
    },
    {
      "module": "camera.CameraSampler",
      "config": {
        "cam_poses": [
        {
          "proximity_checks": {
            "min": 0.3
          },
          "excluded_objs_in_proximity_check":  {
            "provider": "getter.Entity",
            "conditions": {
              "name": "ground_plane.*",
              "type": "MESH"
            }
          },
          "special_objects": ["obj_000006"],
          "resolution_x": 640,
          "resolution_y": 480,
          "cam_K": [572.4114, 0, 325.2611, 0, 573.57043, 242.04899, 0, 0, 1],
          "number_of_samples": 10,
          "location": {
            "provider": "sampler.Shell",
            "center": [0, 0, 0],
            "radius_min": 0.41,
            "radius_max": 1.24,
            "elevation_min": 5,
            "elevation_max": 89,
            "uniform_elevation": True
          },
          "rotation": {
            "format": "look_at",
            "value": {
              "provider": "getter.POI",
              "selector": {
                "provider": "getter.Entity",
                "conditions": {
                  "type": "MESH",
                  "cp_bop_dataset_name": "<args:1>",
                },
                "random_samples": 3
              }
            },
            "inplane_rot": {
              "provider": "sampler.Value",
              "type": "float",
              "min": -0.7854,
              "max": 0.7854
            }
          }
        }
        ]
      }
    },
    {
      "module": "renderer.RgbRenderer",
      "config": {
        "samples": 3,
        "image_type": "JPEG",
        #"render_normals": True,
        #"normal_output_key": "normals",
        "render_depth": True,
        "depth_output_key": "depth"
      }
    },
    {
      "module": "renderer.SegMapRenderer",
      "config": {
        "map_by": "instance"
      }
    },
    {
      "module": "writer.BopWriter_custom",
      "config": {
        "append_to_existing_output": True
        }    
    },
    {
      "module": "writer.BopWriter",
      "config": {
        "append_to_existing_output": False
        }    
    },
    {
      "module": "writer.Hdf5Writer",
      "config": {
        "append_to_existing_output": True
        }
    } 
  ]
}

Some more examples: (attached: Figure_3, Figure_3_1, Figure_4, Figure_4_1)

How to generate accurate convex hulls of concave blender objects for physics positioning?

Hi,

Thanks for the great project! While I was trying to use physics positioning to drop objects into a box, I ran into the issue of objects floating above the box opening.

(screenshot: objects floating above the box opening)

By setting the "collision_shape" to MESH inside the physics positioning module, the issue was somewhat solved (the objects fell into the box). However, it took way too long to simulate the collision, and the collision was not accurate either. As can be seen in the pic below, objects even got partly merged into the box.

image

I was wondering if you have any suggestions on handling this problem.

Thank you!

Missing textures - SceneNet example

I have encountered a problem when trying to render the provided SceneNet ".obj" models. Several materials within these models are not contained in the texture folder. I downloaded the textures as suggested from http://tinyurl.com/zpc9ppb. The problem is that SceneNetLoader.py breaks when a corresponding texture for a given material is not found. I solved that by creating an "unknown" folder with some texture in it. Could you please update SceneNetLoader.py?

Set the camera matrix K parameter ("cam_K") manually

I'm trying to set the camera matrix K manually in the config.yaml. It seems to me that the pipeline is not processing the given input correctly.

Config:

"module": "camera.CameraSampler",
      "config": {
        "cam_poses": [
        {
          "resolution_x": 640,
          "resolution_y": 480,
          "cam_K": [572.4114, 0, 325.2611, 0, 573.57043, 242.04899, 0, 0, 1],
          ...

Expected output from BopWriter in camera.json:

{
  "cx": 325.2611082792282,
  "cy": 242.04899594187737,
  "depth_scale": 0.1,
  "fx": 572.4114,
  "fy": 573.57043,
  "height": 480,
  "width": 640
}

Problem:
The "cx" and "cy" values are updated inside the saved camera.json.
The "fx", "fy", "hight" and "width" are not.

{
  "cx": 325.2611082792282,
  "cy": 242.04899594187737,
  "depth_scale": 0.1,
  "fx": 2992.14,
  "fy": 3003.49,
  "height": 960,
  "width": 1280
}

The images are rendered at 640x480 (width x height) resolution, so at least resolution_x and resolution_y are processed correctly by Blender.

If you uncomment these two lines in the BopWriter, the width and height get written correctly to the camera.json file.

  1. However, I can't follow the code far enough to see how to write "fx" and "fy" correctly to the file. Can anyone help here?
  2. Another thing I ask myself: how can I check whether Blender is actually using these parameters instead of just writing them to a file?
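
Regarding point 2, as a quick sanity check one can at least compare the values written out against the configured cam_K; a small sketch (the camera.json path is a placeholder and depends on the BopWriter output directory):

import json

# Placeholder path -- adjust to the actual BopWriter output location.
with open("output/bop_data/lm/camera.json") as f:
    cam = json.load(f)

expected = {"fx": 572.4114, "fy": 573.57043, "cx": 325.2611, "cy": 242.04899,
            "width": 640, "height": 480}
for key, value in expected.items():
    print(f"{key}: written={cam.get(key)}, expected={value}")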

How to obtain diffuse albedo map and specular roughness map?

Hi, thanks for your contribution. This open-source, high-quality renderer is of great significance for academic research. I have two questions:
Is the rendering algorithm used by the tool Metropolis Light Transport (MLT), physically based rendering, or something else?
How can I get information about other attributes, such as the albedo or specular map?

How to set output image size and camera intrinsic by myself?

Hi, I am new to BlenderProc and find it hard to get it "under control". I want to use BlenderProc to build a synthetic object-pose dataset, but it's hard for me to find the exact arguments for the image size and intrinsics. Is there a way to render a pose with simply:

  1. camera intrinsic
  2. object pose
  3. mesh model, e.g. *.obj or *.ply
  4. image size

An example is bop_renderer, where only these 4 arguments are needed and everything is really easily "programmable".

There are discrete points between the object and the background

I generated some depth images with the examples in BlenderProc4BOP and converted these depth images to point clouds. The code is as follows:

import open3d as o3d
__all__ = [o3d]
print(o3d.__version__)

import numpy as np
depth_image_path = " "
depth_raw = o3d.read_image(depth_image_path)
image_width = 640
image_height = 480
fx = 572.4114
fy = 573.57043
cx = 325.2611
cy = 242.04899

intrinsic = o3d.PinholeCameraIntrinsic(
    image_width, image_height, fx, fy, cx, cy)
extrinsic = np.identity(4)
print(extrinsic)
depth_scale = 10
depth_trunc = 30000.0

np_depth = np.array(depth_raw)
depth_raw = o3d.geometry.Image(np_depth)

pcd = o3d.create_point_cloud_from_depth_image(
    depth=depth_raw, intrinsic=intrinsic, extrinsic=extrinsic,
    depth_scale=depth_scale, depth_trunc=depth_trunc, stride=int(1))

o3d.visualization.draw_geometries([pcd])

The result is as follows:
(attached: two screenshots of the resulting point cloud)
Some points appear between the object and the background, which is not realistic.
Why does this happen?
@thodan
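
These "flying pixels" are a common artifact when depth values are anti-aliased, i.e. interpolated between foreground and background at object boundaries. Independent of the renderer, one common post-processing step is to mask out pixels at strong depth discontinuities before back-projecting; a minimal sketch (the threshold value is an assumption and depends on the depth units):

import numpy as np

def flying_pixel_mask(depth, threshold=0.05):
    # False wherever the depth jump to a neighbouring pixel exceeds the threshold.
    dz_x = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    dz_y = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return np.maximum(dz_x, dz_y) < threshold

# Usage: keep only the point-cloud points whose source pixel passes the mask, e.g.
# valid = flying_pixel_mask(np_depth).reshape(-1)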

colour for replica-dataset

I am trying to render colour and depth on the replica dataset.

Similar to the basic example, I added:

    {
      "module": "renderer.RgbRenderer",
      "config": {
        "output_key": "colors",
        "samples": 350
      }
    },

to the configuration to render colour.

The resulting colour image seems to be mostly black with some shades of white:
(attached: BlenderProc_replica_colour)

Any ideas how I can properly render colour?

How to render vertex color instead of materials color?

In the LineMOD and Occlusion LineMOD datasets, the 3D models are provided as .ply or .obj files without material files; the color is stored in the vertices. It seems that Blender can render vertex color instead of material color, but I have tried and got no results.
Will vertex color painting be supported in the future?

Plans to package project

Are there any plans to package the project so that it can be installed via pip? I feel it would be easier to manage projects if you could just import the modules, and it would maintain consistency with the rest of the Python ecosystem.

Here is a really rough idea of how the package could be used.

from blenderproc import Config, camera, main, renderer, writer

# create the config
config = Config(
    main.Initializer(),
    camera.CameraSample(num_cameras=5),   
    renderer.RgbRenderer(),
    writer.Hdf5Writer()
)
runner = config.build("./blenderproc_output")

# run the config and generate the output Hdf5 files.
runner.run()

# ... train model after file generation, visualize, perform other metrics.

To maintain backwards compatibility you could also add a config-loading capability

config.load_config("./config.yaml")

I'm interested in implementing the above if possible. I imagine it will take a significant change in the file structure to meet the Python packaging specs, so some coordination will be required.

"min_interest_score" parameter does not seem working in SceneNet example

Hi, I have faced the following issue when rendering with the SceneNet example.

My goal was to render scenes of a living room ("living_room_33.obj") with the SceneNet example. An initial result with the default parameter values in the config.yaml file was not satisfying, because a large proportion of the calculated poses and corresponding rendered images showed detailed views of walls only, while the scene context with its different objects was not visible.

First, I tried to solve this by including the "min_interest_score" parameter, which I set to 0.4. However, after the model was loaded and the poses were estimated, the pipeline froze (I waited for a few hours, but the console output stayed at: Trying a min_interest_score value: 0.400000).

Second, I tried to increase the "proximity_checks" parameter, modifying its minimum from 1 to 2. This helped to render scenes where the camera was further away from the walls, but it increased the probability of rendering images with a wrong pose, presumably because some of the images were rendered undefined (the "img" field contained only "nan" values).

I think I can work around these problems with an external script that only keeps rendered images with some minimal depth observed in the depth images. However, I would like to ask whether the problems I faced can instead be fixed by modifying the configuration .yaml file, or whether there is some problem in your pipeline that can be fixed. A circular camera trajectory would not be a solution for me, since my goal is to render images of the scene from random camera positions inside the model.

Thank you for your help.

CocoAnnotationsWriter generates different categories across runs, breaking the "append_to_existing_output" option

I am trying to run the CocoAnnotationsWriter repeatedly to produce a dataset; however, sometimes the exception "The existing coco annotations file contains different categories/objects than the current scene. Merging the two lists is not implemented yet" is raised even though I haven't changed the dataset. I would expect re-running the config to generate the same categories, since otherwise there wouldn't be much use in "append_to_existing_output".

If this is expected behavior, what should I do to prevent it from happening? I would like to execute this config N times and get N*number_of_samples annotated images in ./testing_multi/output/coco_data/coco_annotations.json, where number_of_samples comes from the argument of camera.CameraSampler. See the config below for where these values come from.

Reproduce

To reproduce, go into the config of the coco_annotations example and replace the camera.CameraLoader with

    {
      "module": "camera.CameraSampler",
      "config": {
        "cam_poses": [
          {
            "number_of_samples": 2,
            "location": {
              "provider":"sampler.Uniform3d",
              "max":[10, 10, 8],
              "min":[-10, -10, 12]
            },
            "rotation": {
              "format": "look_at",
              "value": {
                "provider": "getter.POI"
              },          
              "inplane_rot": {
                "provider": "sampler.Value",
                "type": "float",
                "min": -0.7854,
                "max": 0.7854
              }
            }
          }
        ]
      }
    },

and also enable appending annotations

    {
      "module": "writer.CocoAnnotationsWriter",
      "config": {
        "append_to_existing_output": True,
      }
    }

My full config is below; I've also hardcoded the arguments used in the example.

# Args: <cam_file> <obj_file> <output_dir>
{
  "version": 3,
  "setup": {
    "blender_install_path": "/home_local/<env:USER>/blender/",
    "pip": [
      "h5py",
      "scikit-image"
    ]
  },
  "modules": [
    {
      "module": "main.Initializer",
      "config": {
        "global": {
          "output_dir": "./testing_multi/output"
        }
      }
    },
    {
      "module": "loader.BlendLoader",
      "config": {
        "path": "./testing_multi/scene.blend",
        "load_from": "/Object"  # load all objects from the scene file
      }
    },
    {
      "module": "manipulators.WorldManipulator",
      "config": {
        "cf_set_world_category_id": 0  # this sets the worlds background category id to 0
      }
    },
    {
      "module": "lighting.LightLoader",
      "config": {
        "lights": [
          {
            "type": "POINT",
            "location": [5, -5, 5],
            "energy": 1000
          }
        ]
      }
    },
    {
      "module": "camera.CameraSampler",
      "config": {
        "cam_poses": [
          {
            "number_of_samples": 2,
            "location": {
              "provider":"sampler.Uniform3d",
              "max":[10, 10, 8],
              "min":[-10, -10, 12]
            },
            "rotation": {
              "format": "look_at",
              "value": {
                "provider": "getter.POI"
              },          
              "inplane_rot": {
                "provider": "sampler.Value",
                "type": "float",
                "min": -0.7854,
                "max": 0.7854
              }
            }
          }
        ]
      }
    },
    {
      "module": "renderer.RgbRenderer",
      "config": {
        "output_key": "colors"
      }
    },
    {
      "module": "renderer.SegMapRenderer",
      "config": {
        "map_by": ["instance","class"],
      }
    },
    {
      "module": "writer.CocoAnnotationsWriter",
      "config": {
        "append_to_existing_output": True,
      }
    }
  ]
}

Running the config multiple times should raise the error mentioned previously and replicated below. You might need to delete the dataset and retry a few times for the problem to occur.

"The existing coco annotations file contains different categories/objects than the current scene. Merging the two lists is not implemented yet."

The error is raised at line 102 of src/utility/CocoUtility.py.

Problems with running the basic example

When I run the example code, it installs the Blender version etc. until it gives me this error:

File "run.py", line 179, in
installed_packages_name, installed_packages_versions = zip(*[str(line).lower().split('==') for line in installed_packages.splitlines()])
ValueError: not enough values to unpack (expected 2, got 0)

I tried a workaround, skipping those package installations, but then I got stuck with a 'No module named yaml' error, even though I had installed it. If you have an idea how to tackle this problem, please tell me.

I run the whole thing in a conda environment with Python 3.7.

Thanks in advance,
Timon
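
Not a confirmed fix, but a sketch of a guard around the quoted line in run.py that avoids the unpack error when the pip output happens to be empty (assuming installed_packages holds the raw output of the pip freeze call, as in the traceback above):

lines = [str(line).lower().split("==") for line in installed_packages.splitlines()
         if "==" in str(line)]
if lines:
    installed_packages_name, installed_packages_versions = zip(*lines)
else:
    installed_packages_name, installed_packages_versions = (), ()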

How to render vertex colors on the mesh model

Hi,
I modified the camera_sampling example to render some images of my mesh model generated by open3d.integration.ScalableTSDFVolume and got some grayscale images.

I found that this is because my model only has vertex colors, and tried the following module:

    {
      "module": "manipulators.MaterialManipulator",
      "config": {
        "selector": {
          "provider": "getter.Material",
          "conditions": {
            "name": "scene.*"
          }
        },
        "cf_change_to_vertex_color": "Col"
      }
    },

However, it shows the error message: "./src/manipulators/MaterialManipulator.py:137: UserWarning: Warning: No materials selected inside of the MaterialManipulator".

Am I missing some required steps?

Generating synthetic dataset for training on Mask R-CNN

I would like to train Mask R-CNN to recognize a specific object based on a 3D model. I want to produce a synthetic dataset similar to the output of the "bop_object_on_surface_sampling" example, but I need to include my own 3D model and produce ground-truth masks for Mask R-CNN training, and I'm not sure how to do that. Can you point me in the right direction?

Best regards, Mikkel

Get coco_annotations on my scene.

I am trying to get COCO annotations for my scene in Blender. I am creating the scene from a Python script, but when I export it as an .obj file the names of the objects change. How do I save the .obj file with the same names as set in Blender?

Change config to install blender in specific environment

Hi

I want Blender to be installed in a specific Python environment that I created using

python3 -m venv /path/to/new/virtual/environment

How do I change "blender_install_path": "/home_local/<env:USER>/blender/" in the config.yaml file?

My environment is present at the path below, please help me out:

"blender_install_path": "/home/ubuntu/Synthetic_blender/blender/"

Is this correct?

Issue running examples

Hello,
I am trying to run your basic example, but I get this error when running it:
(screenshot of the error attached)

Any ideas on that?

Thanks in advance.

Time for rendering 1 image ?

Hello,
I have generated images with Blender in order to provide datasets for training deep learning models (classification, instance segmentation, pose, ...), with physics integration, random object placement and so on.

But my biggest problem is the time needed to render one image: for me it takes about 1 minute.
I am quite surprised by your rendering time of about 50 images per minute (3000 images per hour).
I was also using Cycles for rendering.

Can you give me some info about the image resolution, and maybe other settings, needed to achieve that throughput?

Thanks in advance.
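
For reference, the settings that usually dominate Cycles render time are the resolution, the sample count and the render device; a sketch using the bpy API directly (the values are only examples, not the settings behind the quoted throughput):

import bpy

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.render.resolution_x = 640      # output width in pixels
scene.render.resolution_y = 480      # output height in pixels
scene.cycles.samples = 50            # path-tracing samples per pixel
scene.cycles.device = "GPU"          # needs a configured CUDA/OptiX/OpenCL device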

ObjectStateWriter is not writing content to file

First of all, thanks for this awesome repo!

Problem:
I have an issue with the ObjectStateWriter.
When I use the writer as shown below under Further information, there is no output file in the output directory, nor is the information written into the .hdf5 file. At least there is no difference between the .hdf5 files with and without the ObjectStateWriter.
EDIT: For some reason, which I don't understand yet, the object states now do get written to the .hdf5 file. However, the .npy file is still not written to the output dir when the Hdf5Writer is not configured in the config.yaml.

Expected behavior:
According to the ObjectStateWriter class, I would expect the information to be written to the .hdf5 file, and if there is no Hdf5Writer, I would expect the output to be written to a .npy numpy file in the output directory.

class ObjectStateWriter(StateWriter):
    """ Writes the state of all objects for each frame to a numpy file if no hfd5 file is available. """

Further information:

To me it looks like the file is written to an intermediate save path, for me something like this:
/dev/shm/blender_proc_XXXXX/object_states_0000.npy
But after the code finishes it gets deleted (of course).

Snippet of the config.yaml

    {
      "module": "renderer.RgbRenderer",
      "config": {
        "samples": 50
      }
    },
    {
      "module": "renderer.SegMapRenderer",
      "config": {
        "map_by": "instance"
      }
    },
    {
      "module": "writer.BopWriter"
    },
    {
      "module": "writer.ObjectStateWriter"
    },    
    {
      "module": "writer.Hdf5Writer"
    },
    {
      "module": "writer.CocoAnnotationsWriter"
    }
