
Comments (34)

stevenwudi avatar stevenwudi commented on August 18, 2024 4

@Tbarowski, here is the favor:
Using your provided JSON file, we rendered the mesh as below, which is different from what you got.
I have cleaned up a script as:
https://github.com/stevenwudi/6DVNET/blob/master/tools/ApolloScape_car_instance/demo_mesh.py
You probably only need the files from:
https://github.com/stevenwudi/6DVNET/tree/master/tools/ApolloScape_car_instance
Moreover, our team has visually screened all the training and validation images and manually circled out the corrupted images. Here is the list:
https://github.com/stevenwudi/6DVNET/tree/master/tools/ApolloScape_car_instance/split_delete
Best
Di Wu

Thanks for all your efforts!

Unfortunately I'm getting a different result with your code:
171206_034636094_camera_5_shrinked

Also, I am not able to run your code directly since I'm missing models, as reported in #13. I'm using the models from 3d_car_instance_sample.tar.gz.

Is there any other source for the models? I'm quite sure that the error lies in the models.
How many models do you have, and do you have files for the models that I'm missing (see the other ticket)?

I think the car models should still be accessible, and you should ping the organiser about the issue.
These are my saved models:
https://www.jianguoyun.com/p/DfzeH4YQ9cyhBhiKgJkB
Best of luck


ThoBaro avatar ThoBaro commented on August 18, 2024 1

With your models I was able to generate the correct output. I'm wondering if there are multiple sources for the models; @ApolloScapeAuto, you should definitely take a look at this!

The dataset 3d_car_instance_sample.tar.gz

Thanks @stevenwudi for your support.


pengwangucla avatar pengwangucla commented on August 18, 2024

It seems initializing GLEW is not necessary for successfully rendering a depth map. I have already updated the code to remove the line with the error message. You may check it out again.


stevenwudi avatar stevenwudi commented on August 18, 2024

Hey, after fetching the update, I recompiled the .so file (I have Ubuntu 16.04 with Python 3.6).
However, exit code 139 (interrupted by signal 11: SIGSEGV) appears when executing render.renderMesh_py.


ApolloScapeAuto avatar ApolloScapeAuto commented on August 18, 2024

We never met that problem. We worked with Ubuntu 14.04 and Python 2.7. So you mean executing render.render_test.py?


pengwangucla avatar pengwangucla commented on August 18, 2024

Is it possibly due to some driver conflict? Try reinstalling the NVIDIA driver (I hit segmentation faults when MATLAB or the nouveau driver was installed).


ApolloScapeAuto avatar ApolloScapeAuto commented on August 18, 2024

@stevenwudi Did you solve the problem?


stevenwudi avatar stevenwudi commented on August 18, 2024

No, unfortunately not: using Ubuntu 14.04 with Python 2.7 still generates the (interrupted by signal 11: SIGSEGV) error, even though we have NVIDIA driver 384.69.

Could the "Failed to initialize GLEW" failure be the reason? Uncommenting the line /* fprintf(stderr, "Failed to initialize GLEW\n"); */ will print the original error message.


skylian avatar skylian commented on August 18, 2024

Could you check the error message returned by glewInit()?
For example, you can do something like this:

GLenum ret = glewInit();
if (ret != GLEW_OK) {
    fprintf(stderr, "Error: %s\n", glewGetErrorString(ret));
}


ApolloScapeAuto avatar ApolloScapeAuto commented on August 18, 2024

@stevenwudi If you can't fix it, maybe you can use another renderer such as Open3D for the rendering operation, given the mesh model, camera intrinsics, and pose.
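
For reference, a minimal sketch of how one car mesh could be built and placed in Open3D, given the vertices/faces of a model and its rotation and translation. The function name and parameters below are my own (not dataset-api code), and the exact Open3D module paths depend on the installed version:

import numpy as np
import open3d as o3d

def show_car_mesh_open3d(vertices, faces, rotation, translation):
    """Visualize one car model placed with a 3x3 rotation and a 3-vector translation."""
    mesh = o3d.geometry.TriangleMesh()
    mesh.vertices = o3d.utility.Vector3dVector(np.asarray(vertices, dtype=np.float64))
    mesh.triangles = o3d.utility.Vector3iVector(np.asarray(faces) - 1)  # faces are 1-indexed in the car models
    mesh.compute_vertex_normals()

    # Apply the labelled pose as a 4x4 rigid transform.
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    mesh.transform(T)

    # Interactive visualization; offscreen depth rendering needs more setup.
    o3d.visualization.draw_geometries([mesh])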


stevenwudi avatar stevenwudi commented on August 18, 2024

Hey, I still couldn't render the image with the provided renderer due to the (interrupted by signal 11: SIGSEGV) error. (Open3D works when installed from pip, and I would say it is a nice piece of rendering software.)

However, I created the following script, replacing showAnn and render_car.
The gist: I accumulate a mask for each individual car. The masks are obtained by first projecting the vertices to 2D using
imgpts, jac = cv2.projectPoints(np.float32(car['vertices']), pose[:3], pose[3:], self.intrinsic, distCoeffs=np.asarray([]))
and then drawing the edges with
cv2.polylines(mask, [pts], True, (0, 255, 0))

It almost makes sense to me, apart from some weird rotational and translational misalignment.
Could you help me check what's wrong? Many thanks!

    def showAnn(self, image_name, settings):
        """Show the annotation of a pose file in an image
        Input:
            image_name: the name of the image
        Output:
            depth: a rendered depth map of each car
            masks: an instance mask of the label
            image_vis: an image showing the overlap of the car models and the image
        """
        car_pose_file = '%s/%s.json' % (self._data_config['pose_dir'], image_name)
        with open(car_pose_file) as f:
            car_poses = json.load(f)
        image_file = '%s/%s.jpg' % (self._data_config['image_dir'], image_name)
        image = cv2.imread(image_file, cv2.IMREAD_UNCHANGED)[:, :, ::-1]

        intrinsic = self.dataset.get_intrinsic(image_name)
        image, self.intrinsic = self.rescale(image, intrinsic)

        merged_image = image.copy()
        mask_all = np.zeros(image.shape)
        for i, car_pose in enumerate(car_poses):
            car_name = car_models.car_id2name[car_pose['car_id']].name
            mask = self.render_car_cv2(car_pose['pose'], car_name, image)
            mask_all += mask

        mask_all = mask_all * 255 / mask_all.max()

        alpha = 0.5
        cv2.addWeighted(image.astype(np.uint8), 1.0, mask_all.astype(np.uint8), 1 - alpha, 0, merged_image)

    def render_car_cv2(self, pose, car_name, image):
        car = self.car_models[car_name]
        pose = np.array(pose)
        # project the 3D vertices onto the 2D image plane
        imgpts, jac = cv2.projectPoints(np.float32(car['vertices']), pose[:3], pose[3:], self.intrinsic, distCoeffs=np.asarray([]))

        # draw the wireframe of each triangular face into the mask
        mask = np.zeros(image.shape)
        for face in car['faces'] - 1:
            pts = np.array([[imgpts[idx, 0, 0], imgpts[idx, 0, 1]] for idx in face], np.int32)
            pts = pts.reshape((-1, 1, 2))
            cv2.polylines(mask, [pts], True, (0, 255, 0))

        return mask


stevenwudi avatar stevenwudi commented on August 18, 2024

Here is one of the example images from the training sequences:
180116_053947113_camera_5


stevenwudi avatar stevenwudi commented on August 18, 2024

It would be really great if we didn't need third-party rendering software (as I have managed above).
But the imperfect alignment of the mesh from my code above would keep our algorithm from working properly. Hence, it would be greatly appreciated if someone could investigate the misalignment issue.
Many thanks!


stevenwudi avatar stevenwudi commented on August 18, 2024

Hey, I am wondering whether the misalignment could be caused by lens distortion in the images.
In that case, would we need the distortion coefficients for the correction?
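
As a side note, cv2.projectPoints already accepts a distortion vector, so if the coefficients were available they could be plugged straight into the projection call used above. A minimal sketch (the function name and parameters are my own, not part of the dataset-api):

import numpy as np
import cv2

def project_with_distortion(vertices, rvec, tvec, intrinsic, dist_coeffs):
    """Project 3D points applying the standard OpenCV distortion model.

    dist_coeffs is [k1, k2, p1, p2, k3]; pass np.zeros(5) for no distortion.
    intrinsic is the 3x3 camera matrix.
    """
    imgpts, _ = cv2.projectPoints(np.float32(vertices), np.float64(rvec),
                                  np.float64(tvec), np.float64(intrinsic),
                                  distCoeffs=np.float64(dist_coeffs))
    return imgpts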


jianghe01 avatar jianghe01 commented on August 18, 2024

Hi Steven, I've seen your example image, and I guess you may be using the rotation from the car fitting result as a quaternion. I think you can try to use it as Euler angles. Hope this helps.


stevenwudi avatar stevenwudi commented on August 18, 2024

Hi @jianghe01, thanks for the heads-up.
After close examination: when I project the 3D vertices onto the 2D image plane, I should use a rotation vector instead of the Euler angles (yaw, pitch, roll) directly.

Hence, using the following should correct the misalignment.

rmat = uts.euler_angles_to_rotation_matrix(pose[:3])
rvect, _ = cv2.Rodrigues(rmat)
imgpts, jac = cv2.projectPoints(np.float32(car['vertices']), rvect, pose[3:], self.intrinsic, distCoeffs=None)
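
(The uts.euler_angles_to_rotation_matrix helper comes from the dataset-api utilities and is not shown here. Purely for reference, a stand-in might look like the sketch below; the angle order and Z-Y-X composition are assumptions and may differ from the dataset's own convention.)

import numpy as np

def euler_angles_to_rotation_matrix_sketch(angles):
    # Hypothetical stand-in for uts.euler_angles_to_rotation_matrix, assuming
    # angles = (roll, pitch, yaw) composed as R = Rz(yaw) . Ry(pitch) . Rx(roll).
    roll, pitch, yaw = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(roll), -np.sin(roll)],
                   [0, np.sin(roll), np.cos(roll)]])
    Ry = np.array([[np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw), np.cos(yaw), 0],
                   [0, 0, 1]])
    return Rz.dot(Ry).dot(Rx)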

If the mask of the mesh is required, we can draw the triangle lines as:

mask = np.zeros(image.shape)
for face in car['faces'] - 1:
    pts = np.array([[imgpts[idx, 0, 0], imgpts[idx, 0, 1]] for idx in face], np.int32)
    pts = pts.reshape((-1, 1, 2))
    cv2.polylines(mask, [pts], True, (0, 255, 0))


stevenwudi avatar stevenwudi commented on August 18, 2024

Here is the produced mesh overlay image; it is generally satisfactory (171206_034808931_Camera_5):

171206_034808931_camera_5-min

Nevertheless, after some examination, we found that some car_id values are rather different from the real car models, for example the closest car and the car under the 40 km/h sign (an orientation issue) in 171206_034636094_Camera_5. Is this normal?
171206_034636094_camera_5-min


pengwangucla avatar pengwangucla commented on August 18, 2024

@stevenwudi From those examples I think you got everything right. What about the first image you showed with misalignment: is it the same as the example in the demo? If it is, this might be due to some labelling error.


XBSong avatar XBSong commented on August 18, 2024

@stevenwudi, thanks for the feedback. We generate the types of some cars automatically and check the results to guarantee accuracy, but some mistakes may exist; we believe most of the car types are right. We will update the results for 171206_034636094_Camera_5 soon.


XBSong avatar XBSong commented on August 18, 2024

@stevenwudi, hi, we have updated the result for 171206_034636094_Camera_5; you can re-download the dataset if needed. Thanks.


stevenwudi avatar stevenwudi commented on August 18, 2024

@pengwangucla @XibinSong Hi, yes, I think the mesh problem is solved. However, there is a new issue in terms of using the camera intrinsics in the code.
I have opened an issue: #3


Jerrypiglet avatar Jerrypiglet commented on August 18, 2024

@pengwangucla @XibinSong Hi, yes, I think the mesh problem is solved. However, there is a new issue in terms of using the camera intrinsics in the code.
I have opened an issue: #3

Hey Di, did you get a chance to fix the GLEW issue of the renderer? @stevenwudi


stevenwudi avatar stevenwudi commented on August 18, 2024

@Jerrypiglet Hi Jerry. No, unfortunately I didn't manage to fix the GLEW issue, so in the end I didn't use the provided renderer at all. Cheers


ShreyasSkandan avatar ShreyasSkandan commented on August 18, 2024

It seems initializing GLEW is not necessary for successfully rendering a depth map. I have already updated the code to remove the line with the error message. You may check it out again.

I'm trying to use proj_point_cloud.py in self_localization, but I'm getting the same "Failed to initialize GLEW" problem. You mention that I should be able to generate a depth image without the renderer; could you provide more details on this? Thanks.
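
Not the dataset's renderer, but as a rough illustration of generating a depth image without GLEW: once the point cloud is transformed into the camera frame, it can be z-buffered into a depth map using only the pinhole intrinsics. A minimal numpy sketch (function and parameter names are my own):

import numpy as np

def point_cloud_to_depth(points_cam, K, height, width):
    """Z-buffer an (N, 3) point cloud (already in the camera frame) into a depth map.

    K is the 3x3 intrinsic matrix; the nearest depth per pixel is kept.
    """
    depth = np.full((height, width), np.inf, dtype=np.float32)
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    in_front = z > 0
    u = np.round(K[0, 0] * x[in_front] / z[in_front] + K[0, 2]).astype(np.int64)
    v = np.round(K[1, 1] * y[in_front] / z[in_front] + K[1, 2]).astype(np.int64)
    z = z[in_front]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], z[inside]
    np.minimum.at(depth, (v, u), z)   # keep the closest point per pixel
    depth[np.isinf(depth)] = 0        # pixels that received no points get depth 0
    return depth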


pengwangucla avatar pengwangucla commented on August 18, 2024

I'm trying to use proj_point_cloud.py in self_localization, but I'm getting the same "Failed to initialize GLEW" problem. You mention that I should be able to generate a depth image without the renderer; could you provide more details on this? Thanks.

I was using Ubuntu 14.04 and haven't tested on other platforms. It may need deeper debugging :P


ShreyasSkandan avatar ShreyasSkandan commented on August 18, 2024

I'm trying to use proj_point_cloud.py in self_localization, but I'm getting the same "Failed to initialize GLEW" problem. You mention that I should be able to generate a depth image without the renderer; could you provide more details on this? Thanks.

I was using Ubuntu 14.04 and haven't tested on other platforms. It may need deeper debugging :P

Oh, alright. Thanks anyway!


ThoBaro avatar ThoBaro commented on August 18, 2024

I solved the GLEW error on Ubuntu 16.04 / Python 2 by using a specific NVIDIA driver version. GLEW throws an error because the context is not properly initialized, which is caused by a bug/missing piece in the newer NVIDIA drivers. Solution from this post:

# From console mode (Ctrl + Alt + F1), with lightdm stopped (sudo service lightdm stop):
$ sudo apt purge *nvidia*
$ # WARNING: the following driver version is deprecated and a potential risk to your system
$ wget https://launchpad.net/ubuntu/+archive/primary/+files/nvidia-graphics-drivers-384_384.90.orig.tar.gz
$ tar xzf nvidia-graphics-drivers-384_384.90.orig.tar.gz
$ cd nvidia-graphics-drivers-384_384.90
$ chmod u+x NVIDIA-Linux-x86_64-384.90-no-compat32.run
$ sudo ./NVIDIA-Linux-x86_64-384.90-no-compat32.run

After that I was able to run the example script.


ThoBaro avatar ThoBaro commented on August 18, 2024

Nevertheless, after some examination, we found that some car_id values are rather different from the real car models, for example the closest car and the car under the 40 km/h sign (an orientation issue) in 171206_034636094_Camera_5. Is this normal?
171206_034636094_camera_5-min

@stevenwudi: Was this pose problem fixed for you? I downloaded the recent version and still have this issue. Also, the car to the left of the problematic car is turned upside down:
example

I'm glad for any help!


stevenwudi avatar stevenwudi commented on August 18, 2024

Nevertheless, after some examination, we found that some car_id values are rather different from the real car models, for example the closest car and the car under the 40 km/h sign (an orientation issue) in 171206_034636094_Camera_5. Is this normal?
171206_034636094_camera_5-min

@stevenwudi: Was this pose problem fixed for you? I downloaded the recent version and still have this issue. Also, the car to the left of the problematic car is turned upside down:
example

I'm glad for any help!

Well, I didn't use the provided rendering tool, but the script that I provided above. I believe the mistakes in the car rotations are only a few, so it does not matter much. But from your low-resolution image, it seems that one car (deep blue) is upside down, and that does not seem correct.


ThoBaro avatar ThoBaro commented on August 18, 2024

Well, I didn't use the provided rendering tool, but the script that I provided above. I believe the mistakes in the car rotations are only a few, so it does not matter much. But from your low-resolution image, it seems that one car (deep blue) is upside down, and that does not seem correct.

@stevenwudi: Okay, thanks for the feedback! I'm very confused about the flipped car. I'm also using my own code, with a render engine that takes a rotation matrix as input. I build it from the Euler angles and the math seems correct. I even re-downloaded the labels. But since the dataset was updated after your rendering, can you do me the favor of comparing the label files? Here's the current one.

171206_034636094_Camera_5.json.txt
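
For a quick check, the two versions of the label file can be diffed entry by entry. A minimal sketch, assuming each file is a JSON list of objects with a 'car_id' and a 6-element 'pose' (as used in the code earlier in this thread) and that both files list the cars in the same order; the function name is my own:

import json
import numpy as np

def diff_pose_files(path_a, path_b, tol=1e-6):
    # Load both label files and compare car_id and pose entry by entry.
    with open(path_a) as fa, open(path_b) as fb:
        cars_a, cars_b = json.load(fa), json.load(fb)
    if len(cars_a) != len(cars_b):
        print('Different number of cars: %d vs %d' % (len(cars_a), len(cars_b)))
    for i, (a, b) in enumerate(zip(cars_a, cars_b)):
        if a['car_id'] != b['car_id']:
            print('Entry %d: car_id %s vs %s' % (i, a['car_id'], b['car_id']))
        delta = np.abs(np.asarray(a['pose']) - np.asarray(b['pose']))
        if delta.max() > tol:
            print('Entry %d: max pose difference %.4f' % (i, delta.max()))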


stevenwudi avatar stevenwudi commented on August 18, 2024

@Tbarowski, here is the favor:
Using your provided JSON file, we rendered the mesh as below, which is different from what you got.
I have cleaned up a script as:
https://github.com/stevenwudi/6DVNET/blob/master/tools/ApolloScape_car_instance/demo_mesh.py
You probably only need the files from:
https://github.com/stevenwudi/6DVNET/tree/master/tools/ApolloScape_car_instance

Moreover, our team has visually screened all the training and validation images and manually circled out the corrupted images. Here is the list:
https://github.com/stevenwudi/6DVNET/tree/master/tools/ApolloScape_car_instance/split_delete

Best
Di Wu
171206_034636094_camera_5


ThoBaro avatar ThoBaro commented on August 18, 2024

@Tbarowski, here is the favor:
Using your provided JSON file, we rendered the mesh as below, which is different from what you got.
I have cleaned up a script as:
https://github.com/stevenwudi/6DVNET/blob/master/tools/ApolloScape_car_instance/demo_mesh.py
You probably only need the files from:
https://github.com/stevenwudi/6DVNET/tree/master/tools/ApolloScape_car_instance

Moreover, our team has visually screened all the training and validation images and manually circled out the corrupted images. Here is the list:
https://github.com/stevenwudi/6DVNET/tree/master/tools/ApolloScape_car_instance/split_delete

Best
Di Wu

Thanks for all your efforts!

Unfortunately I'm getting a different result with your code:
171206_034636094_camera_5_shrinked

Also, I am not able to run your code directly since I'm missing models, as reported in #13. I'm using the models from 3d_car_instance_sample.tar.gz.

Is there any other source for the models? I'm quite sure that the error lies in the models.
How many models do you have, and do you have files for the models that I'm missing (see the other ticket)?


patrick-llgc avatar patrick-llgc commented on August 18, 2024

The current models on their website are definitely wrong. Thanks @stevenwudi for providing the extra source of models!


lpeng8910 avatar lpeng8910 commented on August 18, 2024

Moreover, our team has visually screened all the training and validation images and manually circled out the corrupted images. Here is the list:

Hi @stevenwudi, a few years have passed and it seems that the links are broken now. It would be a big help for my current project to have the list of images with wrong car models circled out. Best, Lisa

