ln-12 / blainder-range-scanner

BlAInder range scanner is a Blender add-on to simulate Lidar and Sonar measurements. The result can be saved as an annotated 2D image or a 3D point cloud.

License: GNU General Public License v3.0

Python 100.00%

blainder-range-scanner's People

Contributors

apalkk, ln-12, ybachmann


blainder-range-scanner's Issues

CSV exporter writes undesired blank spaces

Describe the bug
When I run a sonar scan on sonar_example.blend and output to CSV, the rows contain a bunch of blank spaces. This breaks reading the CSV with numpy.loadtxt. I worked around this by passing the converters argument to loadtxt (see the sketch after the sample output below).

...$ head -n 2 sonar_example_frame_300.csv 
c a t e g o r y I D ; p a r t I D ; X ; Y ; Z ; d i s t a n c e ; X _ n o i s e ; Y _ n o i s e ; Z _ n o i s e ; d i s t a n c e _ n o i s e ; i n t e n s i t y ; r e d ; g r e e n ; b l u e ;
0 ; 0 ; 1 . 5 7 0 ; 4 . 9 4 7 ; 0 . 0 6 9 ; 2 . 8 7 0 ; 1 . 5 7 0 ; 4 . 9 4 7 ; 0 . 0 6 9 ; 2 . 8 7 0 ; 0 . 6 4 7 ; 0 . 8 0 0 ; 0 . 5 0 8 ; 0 . 0 7 7
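A minimal sketch of that workaround, assuming the extra characters are ordinary spaces as in the sample above (the file name and the 14-column layout are taken from the header shown there):

import numpy as np

def clean(field):
    # Older numpy versions pass bytes to converters, newer ones pass str.
    if isinstance(field, bytes):
        field = field.decode()
    # Drop the stray blanks before parsing the number.
    return float(field.replace(" ", ""))

data = np.loadtxt(
    "sonar_example_frame_300.csv",
    delimiter=";",
    skiprows=1,                                # skip the header row
    converters={i: clean for i in range(14)},  # one converter per column
    ndmin=2,
)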

To Reproduce

Run a sonar scan on sonar_example.blend and output to CSV.

Expected behavior

No unnecessary spaces in the CSV output.

Desktop (please complete the following information):

  • OS: Ubuntu 20.04
  • Blender version: 3.0.0

Additional context

I think the problem is the "%.3" format notation in the CSV exporter's code.

Animation - Modifiers are removed when starting simulation

Description
I want to scan an animated tree with Blender. The tree was created with Sapling Tree Gen as a curve, then an armature was created, the curve was converted to a mesh, and a wind animation was added. Before clicking "Generate point clouds", the animation works. However, my simulated point clouds do not show any animation effects, and after the simulation my tree mesh is suddenly static, i.e., the windSway and Skin modifiers are missing.

To Reproduce
Steps to reproduce the behavior:

  1. Activate the sapling tree gen add-on (Edit -> Preferences -> Add-ons).
  2. Add a tree: Type Shift + A -> Curve -> Sapling Tree Gen
  3. In the GUI, change to "Settings: Armature" and activate "Use Armature" and "Make Mesh"
  4. Change to "Settings: Animation" and activate "Armature Animation"
  5. In the View Layer, collapse the "treeArm" object, select the "treemesh" object and assign a material.
  6. Set the camera to look at the tree.
  7. Configure the settings in the Blainder Scanner Add-on (here: Generic lidar, rotating, etc.). Make sure to "Enable animation".
  8. Click "Generate point clouds"

Expected behavior
The generated point clouds clearly show that the tree was moving, i.e., the point clouds from the different frames differ. Furthermore, my animated scene object stays the way it was before the Blainder simulation (with the wind modifier, etc.).

Desktop (please complete the following information):

  • OS: Windows 10
  • Blender version: 3.3

return array of getTargetMaterials may include 'None' elements

Describe the bug
If a material has a 'Material Output' node but there are no input nodes connected to it, the getTargetMaterials function in material_helper.py will return an array that contains a None value for this material.
This is because the value of 'links' inside the function is an empty tuple, so the loop 'for link in links:' never sets a material at the current targetMaterials[materialIndex] spot.

Later on in the scanning process this causes the following exception:
File "C:\Users\Yannic\AppData\Roaming\Blender Foundation\Blender\3.6\scripts\addons\range_scanner\material_helper.py", line 104, in getMaterialColorAndMetallic if material.texture is not None: AttributeError: 'NoneType' object has no attribute 'texture'

To Reproduce
Steps to reproduce the behavior:

  1. Create a material with only a 'Material Output' node:
    image
  2. Assign the material to an object and start a scan.

Expected behavior

  • Output an error message that tells the user that the material (and what material exactly) is faulty.
  • Optionally: Ignore faulty materials and continue scanning anyway.

Desktop (please complete the following information):

  • OS: Windows
  • Blender version: 3.6.1

If you want I can create a fix for this and make a pull request.
Adding a condition in the getTargetMaterials function to check if links == () or len(links) == 0 should do the trick.
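A minimal sketch of that check (the names links, material and targetMaterials follow the issue text above, not necessarily the actual code in material_helper.py):

# Inside getTargetMaterials, before iterating over the output node's links:
if len(links) == 0:
    # Tell the user exactly which material is faulty instead of silently storing None.
    print(f"WARNING: material '{material.name}' has no input links on its 'Material Output' node, skipping it.")
    continue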

Blainder crashing when modifiers present in scene

In our research we perform Blainder scans of IFC files imported with BlenderBIM. In some cases, Blainder crashes when hitting the "Generate scans" button and does not perform any scans. This seems to be related to modifiers being present in the scene.

Could you please clarify whether this is an actual bug or whether this behaviour is expected, meaning modifiers should not be used for scanning with Blainder?

Steps to reproduce the behavior:

  1. Open minimum example file: https://seafile.rlp.net/d/f2a5db7f0fa043daafc0/
  2. Set path and file name for point cloud
  3. Hit 'Generate point cloud'
  4. See error

Expected behavior
Blainder performing scan according to configuration and saving output files to specified path.

Error

location: :-1
Error: Python: Traceback (most recent call last):
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/ui/user_interface.py", line 1637, in execute
    performScan(context, dependencies_installed, properties)
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/ui/user_interface.py", line 1387, in performScan
    modifyAndScan(context, dependencies_installed, properties, None)
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/ui/user_interface.py", line 1284, in modifyAndScan
    generic.startScan(context, dependencies_installed, properties, objectName)
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/scanners/generic.py", line 228, in startScan
    bpy.ops.object.modifier_apply(apply_as='DATA', modifier=modifier.name)
  File "/home/kaufmann/blender-2.93.5-linux-x64/2.93/scripts/modules/bpy/ops.py", line 132, in __call__
    ret = _op_call(self.idname_py(), None, kw)
TypeError: Converting py args to operator properties: : keyword "apply_as" unrecognized

location: :-1

Environment:

  • OS: Ubuntu 20.04 LTS
  • Blender 2.93 LTS
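For reference, the apply_as keyword of bpy.ops.object.modifier_apply was removed in Blender 2.90, which matches the TypeError in the traceback above. A minimal sketch of the kind of change a Blender 2.93 build would need at the failing call in generic.py (a sketch of a possible fix, not the project's actual patch):

# Blender >= 2.90: applying as mesh data is the default, apply_as no longer exists.
bpy.ops.object.modifier_apply(modifier=modifier.name)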

ViewLayer does not contain object

Hello there,

I wanted to prepare different layers for sensor simulation, so I placed Collection.001 on one layer and Collection.002 on a different layer, see the attached picture (still active in the View Layer).
image

Now when I execute the point cloud generation for one scene, it does not work and I get the error that the other cube is not in the scene.

image

Collection.001 and Collection.002 contain exactly the same objects.
Is this a bug or intended behaviour? If it is intended, that's alright, I just wanted to check.

How to increase contrast for depthmap?

To whom this may concern,

I am currently exploring the use of this software to simulate a drone's depth image scan of a road with potholes in blender for a university project.

As such, I have modeled a simple meshgrid with craters in it to represent a road with pothole defects, and positioned the camera directly above it (to represent a top down drone's eye view of the road) like so:
image

I have applied the same material on the meshgrid object as the one applied to the original cube in the example file "script_usage.blend".

Using the function for static scans, I have been able to successfully scan the meshgrid and output the resulting depthmap as an image to a desired file location:
image

However, the problem is that, in keeping with real-world dimensions, I have set the depth of the potholes to be quite shallow relative to the scene dimensions, with depths ranging from 25 to 75 mm.
image

Therefore, there is a great lack of contrast in the resulting depth image:
image

I have verified that the LIDAR capabilities are working by using it to create depthmaps of other objects (like the default arrangement in "script_usage.blend").

Is there any way I can tweak the source code to increase the contrast/sensitivity of the scan so that the difference in depth, although small, becomes more apparent, similar to the result of a Blender Z-pass render? An example Blender-rendered depth image showing potholes of a similar depth, created from the same scene as in the first screenshot, is shown below:
image

I have tried looking around the source code for a bit, but unfortunately it is beyond the scope of my undergraduate knowledge and problem-solving ability.
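A hedged post-processing sketch, outside the add-on itself: if the exported depth map uses only a narrow band of the available value range, rescaling it to the full range boosts the contrast. The file names are placeholders and numpy/Pillow availability is assumed.

import numpy as np
from PIL import Image

depth = np.asarray(Image.open("depthmap.png")).astype(np.float64)
lo, hi = depth.min(), depth.max()
span = max(hi - lo, 1e-9)                      # avoid division by zero on flat images
stretched = (depth - lo) / span * 255.0        # stretch to the full 0..255 range
Image.fromarray(stretched.astype(np.uint8)).save("depthmap_stretched.png")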

Best regards

Export point clouds in more commonly used file formats

Currently it is only possible to export point clouds in the following formats:

  • .las
  • .hdf
  • .csv

It would be really convenient if it were possible to export in other formats (.obj, .ply, ...).
The generated Blender objects for visualization currently can't be exported using the built-in Blender export functions.

Either add more export options in the "scanner window" of the add-on, or make it possible to export the visualizations via the existing Blender export functionality. A possible interim workaround is sketched below.
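A minimal sketch of such a workaround, converting a CSV export to an ASCII .ply file; the file names and the X/Y/Z column positions are assumptions based on the CSV header shown in the first issue above, and a clean (space-free) CSV is assumed:

import numpy as np

# Columns 2, 3, 4 hold X, Y, Z in the exporter's CSV layout (assumed).
points = np.loadtxt("scan.csv", delimiter=";", skiprows=1, usecols=(2, 3, 4), ndmin=2)

with open("scan.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(points)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in points:
        f.write(f"{x} {y} {z}\n")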

Update BlAINder for Version 3.3

"It is recommended to use Blender 2.93 LTS. The next LTS will be available with Blender 3.3 (see here) for which the add-on will be updated. Support for that version is prepared in this branch. Feel free to open an issue if you face problems with Blender 3.x while using that branch.
WARNING: DO NOT install the addon via both ways or the two versions are mixed up and cause errors."

Is there already an updated version?

Customize LiDAR's beam distribution angle

Hi, can I customize the angle of the LiDAR beam in the vertical direction? For example, the vertical beam distribution like RS-Ruby Lite. What should I do? Thanks for your help!

Ocean Modifier being Disabled

I'm not sure if this is a bug or a requirement for the range scanner to function. I'm attempting to simulate a sonar scan that covers up to where the ocean surface meets a structure (the splash zone). For this I'd like a dynamic sea surface, so I've created a plane with the Ocean modifier, but every time I "generate point clouds" the Ocean modifier gets removed, and the scan only reflects a single frame of the distorted plane rather than a surface that changes frame by frame. Otherwise, fantastic software!

Is this modifier being removed by design?

Thanks!

image

ToF Sensor simulation distortion, when adding Gaussian noise

When I add Gaussian noise to the simulation of a ToF sensor (scanner type = static), there is noise, but everything also seems to be projected onto a sphere centered at the camera's origin. Where does this distortion come from?

Thanks.

AttributeError: 'Scene' object has no attribute 'scannerProperties'

Hi!
I am trying to use your toolbox for my application and I would like to utilize your scripts. In order to do that I have tried to run the script_usage.blend file. However, I am getting the following error.

File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\scripts\addons_contrib\range_scanner\ui\user_interface.py", line 1565, in scan_static properties = scene.scannerProperties AttributeError: 'Scene' object has no attribute 'scannerProperties'

Since I am pretty new to Blender scripting I couldn't quite figure out the issue.
I would appreciate it if someone could guide me in the right direction.
Thanks!

Installation of the add-on

I tried to install the add-on using the provided way, which is:
(Copy the range_scanner folder to C:\Program Files\Blender Foundation\Blender 2.83\2.83\scripts\addons_contrib (Windows).)

However, when I try to activate the add-on in Blender, this message appears:
error

How can it be solved?

Regards.

Csv file with random CategoryID and partID

I am currently working with Blender and I am trying to use the BLAINDER add-on to simulate a lidar in a scene and get the point cloud data in a CSV file.
My problem is that even when I assign a categoryID and a partID as custom properties to different objects and to different parts of the same object, I still get random numbers in the CSV file that don't make any sense.
I want the data in the CSV file to be labeled according to the custom properties that I assign.
Any help will be appreciated.
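For reference, a hedged sketch of how such custom properties can be assigned from Blender's Python console; the property names follow this issue's wording, and whether the exporter reads exactly these names is an assumption:

import bpy

obj = bpy.context.active_object
obj["categoryID"] = 1   # hypothetical label values
obj["partID"] = 2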

Compatibility issue with Blender 4.0

Hi,
I am trying to install Blainder for my project, which was built using Blender version 4.0. I am using Ubuntu and I tried to install the add-on from the terminal, but I cannot see the add-on or use it in my project.
Before that I tried to install it through the GUI via the add-ons menu, and I was able to add the Scanner tab, but when I click on it I see a message asking me to install dependencies. When I do, I get this error:
Command '['/snap/blender/4300/4.0/python/bin/python3.10', '-m', 'pip', 'install', 'Jinja2==3.0.2']' returned non-zero exit status 1.
Is it mandatory for this to work that I use the same Blender version (3.3) and Python 3.9?

I need your guidance on this, please.

Issue with custom mesh.

Hi!
I would like to know if it's possible to make a scan from any kind of scene.
I'm trying to use the "lego scene" Blender file from the original "nerf" dataset (file available here: https://drive.google.com/file/d/1yDB7Pmo2KSPw2d0J7E6FHvb-sU0DdTvX/view?usp=sharing), but it's not working.
I'm interested in the X, Y, Z and the intensity of every scan, can you help me?

My setup : the standalone blender 2.93 with the add-on (working well with the example scene).
I first did "make single user" on the mesh (I had an issue with it)
Then i had this error :
"closestHit.color = materialProperty.color
AttributeError: 'NoneType' object has no attribute 'color'"
Do you know if there is any hack in order to make the scan possible? (

Thank you very much for your help.
PS: Thanks for sharing your code; it would be very helpful for me if I could make this scan.

Background operation or Context override raises AttributeError

Describe the bug

The add-on crashes with Blender in background mode or when overriding the context. Both use cases raise AttributeError when calling the scan_static function (and presumably others).

This precludes headless, batch processing of .blend files.

To Reproduce

For a script named script.py that calls scan_static, and for a .blend file with the range_scanner add-on enabled, run the script from the command line like this: blender -b <blend file> -P script.py

You should see a traceback like this:

Traceback (most recent call last):
  File <script.py>, line 5, in <module>
    range_scanner.ui.user_interface.scan_static(
  File ".../range_scanner/ui/user_interface.py", line 1621, in scan_static
    performScan(context, dependencies_installed, properties)
  File ".../range_scanner/ui/user_interface.py", line 1387, in performScan
    modifyAndScan(context, dependencies_installed, properties, None)
  File ".../range_scanner/ui/user_interface.py", line 1284, in modifyAndScan
    generic.startScan(context, dependencies_installed, properties, objectName)
  File ".../range_scanner/scanners/generic.py", line 305, in startScan
    mode = bpy.context.area.type
AttributeError: 'NoneType' object has no attribute 'type'

Trying to avoid that problem by using a context override yields the following traceback:

Traceback (most recent call last):
  File <script.py>, line 29, in <module>
    range_scanner.ui.user_interface.scan_static(
  File ".../range_scanner/ui/user_interface.py", line 1564, in scan_static
    scene = context.scene
AttributeError: 'dict' object has no attribute 'scene'

The problem here is that the code expects a bpy_types.Context instance, but an overridden context is a dictionary.
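A hedged sketch of a possible way around this on newer Blender versions (3.2 and later), where bpy.context.temp_override yields a real Context object instead of a dictionary; untested against the add-on, and in pure background mode there may be no window or 3D Viewport to override with:

import bpy
import range_scanner  # the installed add-on module, assuming it is enabled

# Pick a window and a 3D Viewport area so that bpy.context.area is not None.
window = bpy.context.window_manager.windows[0]
area = next(a for a in window.screen.areas if a.type == 'VIEW_3D')

with bpy.context.temp_override(window=window, area=area):
    # Call range_scanner.ui.user_interface.scan_static(bpy.context, ...) here
    # with the same arguments as in script_usage.blend.
    pass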

Expected behavior

The add-ons in Blender's core all support overriding the context.

Desktop (please complete the following information):

  • OS: Ubuntu 20.04
  • Blender version: 2.93.6

Export sensor poses (location and rotation)

Thanks for the nice tool.
I create a curve for the camera animation and create many frames (Blender auto-keying) along the curve for a smooth sensor movement (animation), skipping every 2 to 5 frames.
Is it possible to export the sensor trajectory including the sensor poses (locations and rotations)? How can we do that?
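A hedged sketch using plain bpy, independent of the add-on: step through the animation frames and dump the camera's location and rotation to a CSV file. The object name and file name are placeholders.

import bpy

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]   # hypothetical sensor/camera object name

with open("sensor_poses.csv", "w") as f:
    f.write("frame;x;y;z;rot_x;rot_y;rot_z\n")
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)                      # evaluate the animation at this frame
        loc = cam.matrix_world.to_translation()
        rot = cam.matrix_world.to_euler()           # rotation as Euler angles in radians
        f.write(f"{frame};{loc.x};{loc.y};{loc.z};{rot.x};{rot.y};{rot.z}\n")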

How to export the 3D point cloud of the scene?

Hi, thank you very much for your work, it's great and very helpful for my research. I am having a problem and would like your help. Now that the plugin can export the scan image and depth map of the side-scan sonar, I would like to know how to export the 3D point cloud map of this scene, i.e., the height information and plane coordinates of the scene.

render error

I just opened scence/sonar_example.blend.
After clicking Render Animation, the log shows:
Render error (No such file or directory) cannot save: 'D:\Projekte\Masterarbeit\range_scanner\image_1.png0001.png' do nothing
I am sure that the export directory is set to a local directory.
Is there a necessary step that has not been done?

generate point cloud causes memory leak

The range scanner add-on causes a memory leak on macOS when I click "Generate point cloud".

To Reproduce
Steps to reproduce the behavior:

  1. Open the Blender file (shared via the Google Drive link below).
  2. Click on one of the scene cameras in the Outliner and go into camera view.
  3. Set it as the scanning object.
  4. Click the "Generate point cloud" button on the scanner panel.

Expected behavior
It should generate a set of points. Instead, Blender becomes unresponsive for a while and memory usage keeps increasing to 100 GB and more, until the system warns that it is running out of application memory and I need to force quit Blender.

Screenshots

Screen Shot 2023-06-17 at 11 47 08

Desktop (please complete the following information):

  • OS: macOS Monterey 12.3
  • Blender version: 3.3

Additional context
It doesn't happen in all scenes; it tends to occur in larger scenes that contain more objects (but I am not sure about this correlation). I need to simulate a lidar scan of a driving environment. I tried reducing the resolution and the resolution scale, but that doesn't seem to help.

The file size is beyond the attachment limit, so I am sharing it via Google Drive: https://drive.google.com/file/d/1EW7QjIgdqLoUigjzD_oIHWHurSIZEJ3x/view?usp=sharing

Trouble reproducing examples: RGB Values in tof (Kinect v2) scan

Hi,
I am trying to reproduce the examples from the repo. In particular, I am interested in RGB values in the point clouds coming from materials/textures in Blender. I tried the following, based on example_scenes/part_segmentation_and_image_rendering.blend:

  • Change the material of one chair to an image texture, see
    blender_chair_image_texture
  • Perform a scan with the ToF Kinect v2 scanner
  • Expected result: RGB values from the image texture in the point cloud
  • Actual result:
    las_pointcloud_kinectv2
  • Even if I use an RGB color for the material, I get the same RGB values as in the example.

From a glimpse into the source code and the examples I figured that RGB scanning is actually possible. Can you provide the correct material/texture configuration and properties to get correct results?
The .blend file and the used texture image can be downloaded from https://seafile.rlp.net/d/f2a5db7f0fa043daafc0/ for reference.
Some hints on this would be highly appreciated.
Fabian

Scan Parameters

Hello, I was not able to post this as a pull request...
Great work so far!
I would like to have an uneven vertical FOV, for example -15° to 25°.
I would also like to be able to save my own presets as a scanner.

Many thanks!

Add requirements.txt for manual pip installation of dependencies

The automatic search for dependencies did not work for me (tested on Win10 and Ubuntu 20.04). When trying to install the dependencies manually, I ran into an issue with laspy: by default, laspy 2.x is installed, but it is not compatible with the las export in Blainder.
I would suggest adding a requirements file to declare the required versions: requirements.txt.
Note: I only tested the las export with laspy 1.7.0, no other tests so far.
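A minimal sketch of such a requirements.txt, listing only the packages and versions mentioned in these issues; the authoritative dependency list lives in the add-on's installation code and almost certainly contains more entries:

# requirements.txt (sketch, incomplete)
laspy==1.7.0
Jinja2==3.0.2
h5py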

Unable to comprehend examples

I am trying to replicate the classification examples from the documentation from a fresh start. I import the chair model you provide in .obj format.
However, I see that the chair model is imported as a whole and not as separate parts as mentioned in the example, i.e., legs and plate.
I would highly appreciate it if someone could nudge me in the right direction.
Thanks!

Ability for Channels

Hello,
in my opinion the laser simulation is wrong. In the following screenshots you can see (or assume ^^) that there are 122 points vertically (40° FOV divided by 0.33° angular resolution):

image

The Velodyne Ultra Puck has only 32 channels (https://velodynelidar.com/wp-content/uploads/2019/12/63-9378_Rev-F_Ultra-Puck_Datasheet_Web.pdf), which means there should also be only 32 points vertically. I got the following mail from the Velodyne support, since I was really wondering:

"In the Ultra puck sensor you have 32 laser beam distributed in a non linear manner in these 40 degrees vicinity giving you 0.33 degrees resolution in the middle line and it grows as you go toward the outer line."

Am I mistaken?
The ability to work with channels would be nice.

Regards
Thomas
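For reference, a hedged sketch (not the add-on's API) of how a non-uniform set of 32 vertical beam angles could be generated, denser around the horizon as described in the Velodyne mail quoted above; the real sensor's angles come from its datasheet, not from this formula:

import numpy as np

channels = 32
fov_deg = 40.0                                # total vertical field of view

# 32 samples over [-1, 1], warped so the spacing grows towards the edges.
t = np.linspace(-1.0, 1.0, channels)
angles = (fov_deg / 2.0) * np.sign(t) * t**2  # ~0° near the centre, +/-20° at the edges

print(np.round(angles, 2))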

Error when trying to install dependencies (Linux)

Bug Description: When installing the range scanner add-on it tells me there are missing dependencies. Upon hitting the "Install dependencies" button I receive the following output in the console:

"ERROR: Command '['/snap/blender/3082/3.4/python/bin/python3.10', '-m', 'pip', 'install', 'Jinja2==3.0.2']' returned non-zero exit status 1"

However, I can manually use that same Python binary/pip to install Jinja2 and all of the other dependencies in the README. Despite being able to see them all listed in Blender's instance of pip (using pip list with the Blender Python executable), the add-on still does not see them and still requires the "Install dependencies" button to be pressed, repeating the error.

When I run the subprocess command directly in Blender's console (command: subprocess.run([sys.executable, "-m", "pip", "install", "Jinja2==3.0.2"], check=True)), it does not error out and instead returns returncode=0. Despite this, the "missing dependencies" notice does not go away in the Scanner tab.
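For reference, a self-contained sketch of that manual installation, run from Blender's Python console so that sys.executable points at Blender's bundled interpreter (Jinja2 is only one of the add-on's dependencies; the others would be installed the same way):

import subprocess
import sys

# Install one dependency into Blender's bundled Python environment.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "Jinja2==3.0.2"],
    check=True,
)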

To Reproduce:

  1. Snap install Blender (currently snap is installing Blender 3.4.0)
  2. Launch Blender using the terminal
  3. Download the Blainder repository
  4. Extract all files and zip range_scanner folder
  5. In Blender's add-on menu select the range_scanner zip folder
  6. Select "Scanner" from tab and press "Install dependencies"
  7. See output error in terminal

Expected Behavior:
I have used this add-on with Blender 2.93.9 on a Windows 7 machine, where the dependencies install fine. I now need the developed pipeline to run on a Linux machine, which results in the error described above.

Desktop:
  • OS: Linux 20.04.5
  • Blender 3.4.0

Not compatible with ARM64 Macs

Describe the bug
Installation fails because of h5py.

To Reproduce
Install on an M1 Mac.

Expected behavior
No error.

Desktop (please complete the following information):

  • OS: macOS 12, M1
  • Blender version 2.93.5

Additional context
See: h5py/h5py#1810 and h5py/h5py#1981

Rendered object surfaces in point cloud

Hi!
First of all thank you very much for sharing your code here. It will be very useful for me as I want to use it to create a synthetic dataset for a machine learning application.

I noticed that the points in the point cloud have the color of the material itself. What I would like to use instead are the colors of the rendered object surfaces, as in Viewport Shading mode.
I would like to know if this is possible, or what I would have to do to get a point cloud that has this information.

Thank you for your help and best regards

RenderSettings.resolution_percentage expected an int type

Great work with the plugin!

I was able to get things working in Blender 3.x with one small change to the code. It seems that "RenderSettings.resolution_percentage" used to allow a float but now requires an int. My workaround was to simply change lidar.py line 319 to the following:

scene.render.resolution_percentage = int(percentage)

Perhaps the UI could also be updated to reflect the type change.

I haven't run into any other issues besides this one so far. I would be willing to put together a pull request if that would be useful.
